Jan 15 00:30:40.051119 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 14 22:02:13 -00 2026 Jan 15 00:30:40.051151 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=1042e64ca7212ba2a277cb872bdf1dc4e195c9fb8110078c443b3efbd2488cb9 Jan 15 00:30:40.051165 kernel: BIOS-provided physical RAM map: Jan 15 00:30:40.051172 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 15 00:30:40.051280 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 15 00:30:40.051288 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 15 00:30:40.051296 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Jan 15 00:30:40.051307 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Jan 15 00:30:40.051315 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 15 00:30:40.051322 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 15 00:30:40.051335 kernel: NX (Execute Disable) protection: active Jan 15 00:30:40.051343 kernel: APIC: Static calls initialized Jan 15 00:30:40.051350 kernel: SMBIOS 2.8 present. Jan 15 00:30:40.051358 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Jan 15 00:30:40.051367 kernel: DMI: Memory slots populated: 1/1 Jan 15 00:30:40.051378 kernel: Hypervisor detected: KVM Jan 15 00:30:40.051389 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Jan 15 00:30:40.051397 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 15 00:30:40.051405 kernel: kvm-clock: using sched offset of 3945362585 cycles Jan 15 00:30:40.051414 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 15 00:30:40.051423 kernel: tsc: Detected 2494.138 MHz processor Jan 15 00:30:40.051432 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 15 00:30:40.051444 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 15 00:30:40.051455 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Jan 15 00:30:40.051464 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 15 00:30:40.051473 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 15 00:30:40.051481 kernel: ACPI: Early table checksum verification disabled Jan 15 00:30:40.051490 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Jan 15 00:30:40.051499 kernel: ACPI: RSDT 0x000000007FFE19FD 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 00:30:40.051507 kernel: ACPI: FACP 0x000000007FFE17E1 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 00:30:40.051518 kernel: ACPI: DSDT 0x000000007FFE0040 0017A1 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 00:30:40.051527 kernel: ACPI: FACS 0x000000007FFE0000 000040 Jan 15 00:30:40.051535 kernel: ACPI: APIC 0x000000007FFE1855 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 00:30:40.052308 kernel: ACPI: HPET 0x000000007FFE18D5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 00:30:40.052322 kernel: ACPI: SRAT 0x000000007FFE190D 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 
00:30:40.052332 kernel: ACPI: WAET 0x000000007FFE19D5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 00:30:40.052341 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe17e1-0x7ffe1854] Jan 15 00:30:40.052356 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe17e0] Jan 15 00:30:40.052365 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Jan 15 00:30:40.052373 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe1855-0x7ffe18d4] Jan 15 00:30:40.052386 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe18d5-0x7ffe190c] Jan 15 00:30:40.052530 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe190d-0x7ffe19d4] Jan 15 00:30:40.052540 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe19d5-0x7ffe19fc] Jan 15 00:30:40.052553 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 15 00:30:40.052562 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 15 00:30:40.052571 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff] Jan 15 00:30:40.052580 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff] Jan 15 00:30:40.052590 kernel: Zone ranges: Jan 15 00:30:40.052599 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 15 00:30:40.052611 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Jan 15 00:30:40.052620 kernel: Normal empty Jan 15 00:30:40.052629 kernel: Device empty Jan 15 00:30:40.052638 kernel: Movable zone start for each node Jan 15 00:30:40.052647 kernel: Early memory node ranges Jan 15 00:30:40.052660 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 15 00:30:40.052675 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Jan 15 00:30:40.052686 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Jan 15 00:30:40.052704 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 15 00:30:40.052716 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 15 00:30:40.052728 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Jan 15 00:30:40.052746 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 15 00:30:40.052758 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 15 00:30:40.052787 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 15 00:30:40.052800 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 15 00:30:40.052817 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 15 00:30:40.052831 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 15 00:30:40.052847 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 15 00:30:40.052859 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 15 00:30:40.052872 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 15 00:30:40.052886 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 15 00:30:40.052898 kernel: TSC deadline timer available Jan 15 00:30:40.052921 kernel: CPU topo: Max. logical packages: 1 Jan 15 00:30:40.052935 kernel: CPU topo: Max. logical dies: 1 Jan 15 00:30:40.052950 kernel: CPU topo: Max. dies per package: 1 Jan 15 00:30:40.052964 kernel: CPU topo: Max. threads per core: 1 Jan 15 00:30:40.052979 kernel: CPU topo: Num. cores per package: 2 Jan 15 00:30:40.052993 kernel: CPU topo: Num. 
threads per package: 2 Jan 15 00:30:40.053008 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jan 15 00:30:40.053024 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 15 00:30:40.053044 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Jan 15 00:30:40.053060 kernel: Booting paravirtualized kernel on KVM Jan 15 00:30:40.053077 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 15 00:30:40.053093 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 15 00:30:40.053109 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jan 15 00:30:40.053125 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jan 15 00:30:40.053140 kernel: pcpu-alloc: [0] 0 1 Jan 15 00:30:40.053159 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 15 00:30:40.053191 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=1042e64ca7212ba2a277cb872bdf1dc4e195c9fb8110078c443b3efbd2488cb9 Jan 15 00:30:40.053210 kernel: random: crng init done Jan 15 00:30:40.053225 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 15 00:30:40.053242 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 15 00:30:40.053254 kernel: Fallback order for Node 0: 0 Jan 15 00:30:40.053266 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153 Jan 15 00:30:40.053283 kernel: Policy zone: DMA32 Jan 15 00:30:40.053296 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 15 00:30:40.053308 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 15 00:30:40.053320 kernel: Kernel/User page tables isolation: enabled Jan 15 00:30:40.053333 kernel: ftrace: allocating 40097 entries in 157 pages Jan 15 00:30:40.053345 kernel: ftrace: allocated 157 pages with 5 groups Jan 15 00:30:40.053357 kernel: Dynamic Preempt: voluntary Jan 15 00:30:40.053374 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 15 00:30:40.053387 kernel: rcu: RCU event tracing is enabled. Jan 15 00:30:40.053400 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 15 00:30:40.053413 kernel: Trampoline variant of Tasks RCU enabled. Jan 15 00:30:40.053426 kernel: Rude variant of Tasks RCU enabled. Jan 15 00:30:40.053438 kernel: Tracing variant of Tasks RCU enabled. Jan 15 00:30:40.053451 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 15 00:30:40.053463 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 15 00:30:40.053481 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 15 00:30:40.053499 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 15 00:30:40.053512 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 15 00:30:40.053524 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 15 00:30:40.053537 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 15 00:30:40.053550 kernel: Console: colour VGA+ 80x25 Jan 15 00:30:40.053563 kernel: printk: legacy console [tty0] enabled Jan 15 00:30:40.053579 kernel: printk: legacy console [ttyS0] enabled Jan 15 00:30:40.053592 kernel: ACPI: Core revision 20240827 Jan 15 00:30:40.053606 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 15 00:30:40.053629 kernel: APIC: Switch to symmetric I/O mode setup Jan 15 00:30:40.053646 kernel: x2apic enabled Jan 15 00:30:40.053659 kernel: APIC: Switched APIC routing to: physical x2apic Jan 15 00:30:40.053672 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 15 00:30:40.053685 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns Jan 15 00:30:40.053702 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138) Jan 15 00:30:40.053718 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 15 00:30:40.053732 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 15 00:30:40.053745 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 15 00:30:40.053758 kernel: Spectre V2 : Mitigation: Retpolines Jan 15 00:30:40.053775 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 15 00:30:40.053789 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jan 15 00:30:40.053802 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 15 00:30:40.053815 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 15 00:30:40.053829 kernel: MDS: Mitigation: Clear CPU buffers Jan 15 00:30:40.053842 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 15 00:30:40.053855 kernel: active return thunk: its_return_thunk Jan 15 00:30:40.053872 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 15 00:30:40.053885 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 15 00:30:40.053899 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 15 00:30:40.053913 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 15 00:30:40.053927 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 15 00:30:40.053940 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 15 00:30:40.053954 kernel: Freeing SMP alternatives memory: 32K Jan 15 00:30:40.053971 kernel: pid_max: default: 32768 minimum: 301 Jan 15 00:30:40.053985 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 15 00:30:40.053998 kernel: landlock: Up and running. Jan 15 00:30:40.054011 kernel: SELinux: Initializing. Jan 15 00:30:40.054025 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 15 00:30:40.054038 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 15 00:30:40.054053 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Jan 15 00:30:40.054071 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Jan 15 00:30:40.054086 kernel: signal: max sigframe size: 1776 Jan 15 00:30:40.054101 kernel: rcu: Hierarchical SRCU implementation. Jan 15 00:30:40.054116 kernel: rcu: Max phase no-delay instances is 400. 
Jan 15 00:30:40.054131 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 15 00:30:40.054141 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 15 00:30:40.054150 kernel: smp: Bringing up secondary CPUs ... Jan 15 00:30:40.054167 kernel: smpboot: x86: Booting SMP configuration: Jan 15 00:30:40.056705 kernel: .... node #0, CPUs: #1 Jan 15 00:30:40.056725 kernel: smp: Brought up 1 node, 2 CPUs Jan 15 00:30:40.056736 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS) Jan 15 00:30:40.056747 kernel: Memory: 1985340K/2096612K available (14336K kernel code, 2445K rwdata, 29896K rodata, 15432K init, 2608K bss, 106708K reserved, 0K cma-reserved) Jan 15 00:30:40.056757 kernel: devtmpfs: initialized Jan 15 00:30:40.056786 kernel: x86/mm: Memory block size: 128MB Jan 15 00:30:40.056810 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 15 00:30:40.056825 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 15 00:30:40.056839 kernel: pinctrl core: initialized pinctrl subsystem Jan 15 00:30:40.056850 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 15 00:30:40.056861 kernel: audit: initializing netlink subsys (disabled) Jan 15 00:30:40.056871 kernel: audit: type=2000 audit(1768437037.297:1): state=initialized audit_enabled=0 res=1 Jan 15 00:30:40.056881 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 15 00:30:40.056893 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 15 00:30:40.056903 kernel: cpuidle: using governor menu Jan 15 00:30:40.056913 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 15 00:30:40.056923 kernel: dca service started, version 1.12.1 Jan 15 00:30:40.056932 kernel: PCI: Using configuration type 1 for base access Jan 15 00:30:40.056942 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 15 00:30:40.056951 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 15 00:30:40.056963 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 15 00:30:40.056973 kernel: ACPI: Added _OSI(Module Device) Jan 15 00:30:40.056983 kernel: ACPI: Added _OSI(Processor Device) Jan 15 00:30:40.056992 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 15 00:30:40.057002 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 15 00:30:40.057012 kernel: ACPI: Interpreter enabled Jan 15 00:30:40.057021 kernel: ACPI: PM: (supports S0 S5) Jan 15 00:30:40.057030 kernel: ACPI: Using IOAPIC for interrupt routing Jan 15 00:30:40.057043 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 15 00:30:40.057052 kernel: PCI: Using E820 reservations for host bridge windows Jan 15 00:30:40.057062 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 15 00:30:40.057072 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 15 00:30:40.057375 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 15 00:30:40.057523 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 15 00:30:40.057666 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 15 00:30:40.057679 kernel: acpiphp: Slot [3] registered Jan 15 00:30:40.057689 kernel: acpiphp: Slot [4] registered Jan 15 00:30:40.057699 kernel: acpiphp: Slot [5] registered Jan 15 00:30:40.057709 kernel: acpiphp: Slot [6] registered Jan 15 00:30:40.057718 kernel: acpiphp: Slot [7] registered Jan 15 00:30:40.057731 kernel: acpiphp: Slot [8] registered Jan 15 00:30:40.057740 kernel: acpiphp: Slot [9] registered Jan 15 00:30:40.057750 kernel: acpiphp: Slot [10] registered Jan 15 00:30:40.057759 kernel: acpiphp: Slot [11] registered Jan 15 00:30:40.057768 kernel: acpiphp: Slot [12] registered Jan 15 00:30:40.057778 kernel: acpiphp: Slot [13] registered Jan 15 00:30:40.057787 kernel: acpiphp: Slot [14] registered Jan 15 00:30:40.057797 kernel: acpiphp: Slot [15] registered Jan 15 00:30:40.057809 kernel: acpiphp: Slot [16] registered Jan 15 00:30:40.057818 kernel: acpiphp: Slot [17] registered Jan 15 00:30:40.057828 kernel: acpiphp: Slot [18] registered Jan 15 00:30:40.057838 kernel: acpiphp: Slot [19] registered Jan 15 00:30:40.057847 kernel: acpiphp: Slot [20] registered Jan 15 00:30:40.057857 kernel: acpiphp: Slot [21] registered Jan 15 00:30:40.057866 kernel: acpiphp: Slot [22] registered Jan 15 00:30:40.057878 kernel: acpiphp: Slot [23] registered Jan 15 00:30:40.057887 kernel: acpiphp: Slot [24] registered Jan 15 00:30:40.057896 kernel: acpiphp: Slot [25] registered Jan 15 00:30:40.057905 kernel: acpiphp: Slot [26] registered Jan 15 00:30:40.057915 kernel: acpiphp: Slot [27] registered Jan 15 00:30:40.057924 kernel: acpiphp: Slot [28] registered Jan 15 00:30:40.057934 kernel: acpiphp: Slot [29] registered Jan 15 00:30:40.057943 kernel: acpiphp: Slot [30] registered Jan 15 00:30:40.057955 kernel: acpiphp: Slot [31] registered Jan 15 00:30:40.057964 kernel: PCI host bridge to bus 0000:00 Jan 15 00:30:40.058105 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 15 00:30:40.059852 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 15 00:30:40.059997 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 15 00:30:40.060118 kernel: pci_bus 0000:00: 
root bus resource [mem 0x80000000-0xfebfffff window] Jan 15 00:30:40.060298 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Jan 15 00:30:40.060418 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 15 00:30:40.060574 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Jan 15 00:30:40.060715 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Jan 15 00:30:40.060992 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint Jan 15 00:30:40.061156 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef] Jan 15 00:30:40.063996 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk Jan 15 00:30:40.064241 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk Jan 15 00:30:40.064392 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk Jan 15 00:30:40.064593 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk Jan 15 00:30:40.064875 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint Jan 15 00:30:40.065035 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f] Jan 15 00:30:40.065302 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Jan 15 00:30:40.065567 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jan 15 00:30:40.065731 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jan 15 00:30:40.067785 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint Jan 15 00:30:40.068028 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref] Jan 15 00:30:40.068221 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref] Jan 15 00:30:40.068365 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff] Jan 15 00:30:40.068502 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref] Jan 15 00:30:40.068639 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 15 00:30:40.068807 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 15 00:30:40.068955 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf] Jan 15 00:30:40.069090 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff] Jan 15 00:30:40.071374 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref] Jan 15 00:30:40.071652 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 15 00:30:40.071886 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df] Jan 15 00:30:40.072118 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff] Jan 15 00:30:40.072373 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref] Jan 15 00:30:40.072613 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint Jan 15 00:30:40.072916 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f] Jan 15 00:30:40.073142 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff] Jan 15 00:30:40.080477 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref] Jan 15 00:30:40.080703 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jan 15 00:30:40.080898 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f] Jan 15 00:30:40.081059 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff] Jan 15 00:30:40.081302 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref] Jan 15 00:30:40.081452 kernel: pci 0000:00:07.0: 
[1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jan 15 00:30:40.081614 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff] Jan 15 00:30:40.081776 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff] Jan 15 00:30:40.081922 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref] Jan 15 00:30:40.082074 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint Jan 15 00:30:40.084608 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f] Jan 15 00:30:40.084916 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref] Jan 15 00:30:40.084945 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 15 00:30:40.084956 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 15 00:30:40.084966 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 15 00:30:40.084976 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 15 00:30:40.084986 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 15 00:30:40.084996 kernel: iommu: Default domain type: Translated Jan 15 00:30:40.085006 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 15 00:30:40.085018 kernel: PCI: Using ACPI for IRQ routing Jan 15 00:30:40.085028 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 15 00:30:40.085038 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 15 00:30:40.085047 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Jan 15 00:30:40.085204 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jan 15 00:30:40.085373 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jan 15 00:30:40.085553 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 15 00:30:40.085575 kernel: vgaarb: loaded Jan 15 00:30:40.085590 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 15 00:30:40.085604 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 15 00:30:40.085618 kernel: clocksource: Switched to clocksource kvm-clock Jan 15 00:30:40.085631 kernel: VFS: Disk quotas dquot_6.6.0 Jan 15 00:30:40.085644 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 15 00:30:40.085659 kernel: pnp: PnP ACPI init Jan 15 00:30:40.085680 kernel: pnp: PnP ACPI: found 4 devices Jan 15 00:30:40.085694 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 15 00:30:40.085707 kernel: NET: Registered PF_INET protocol family Jan 15 00:30:40.085722 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 15 00:30:40.085736 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 15 00:30:40.085752 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 15 00:30:40.085767 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 15 00:30:40.085782 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 15 00:30:40.085792 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 15 00:30:40.085801 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 15 00:30:40.085811 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 15 00:30:40.085821 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 15 00:30:40.085834 kernel: NET: Registered PF_XDP protocol family Jan 15 00:30:40.086020 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 15 00:30:40.086149 kernel: pci_bus 
0000:00: resource 5 [io 0x0d00-0xffff window] Jan 15 00:30:40.086362 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 15 00:30:40.086499 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jan 15 00:30:40.086617 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jan 15 00:30:40.086758 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jan 15 00:30:40.086894 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 15 00:30:40.086916 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 15 00:30:40.087050 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 28772 usecs Jan 15 00:30:40.087064 kernel: PCI: CLS 0 bytes, default 64 Jan 15 00:30:40.087074 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 15 00:30:40.087084 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns Jan 15 00:30:40.087094 kernel: Initialise system trusted keyrings Jan 15 00:30:40.087104 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 15 00:30:40.087117 kernel: Key type asymmetric registered Jan 15 00:30:40.087127 kernel: Asymmetric key parser 'x509' registered Jan 15 00:30:40.087137 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 15 00:30:40.087147 kernel: io scheduler mq-deadline registered Jan 15 00:30:40.087156 kernel: io scheduler kyber registered Jan 15 00:30:40.087166 kernel: io scheduler bfq registered Jan 15 00:30:40.089217 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 15 00:30:40.089242 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jan 15 00:30:40.089253 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 15 00:30:40.089263 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 15 00:30:40.089273 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 15 00:30:40.089283 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 15 00:30:40.089293 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 15 00:30:40.089303 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 15 00:30:40.089316 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 15 00:30:40.089503 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 15 00:30:40.089519 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 15 00:30:40.089646 kernel: rtc_cmos 00:03: registered as rtc0 Jan 15 00:30:40.089770 kernel: rtc_cmos 00:03: setting system clock to 2026-01-15T00:30:38 UTC (1768437038) Jan 15 00:30:40.089894 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 15 00:30:40.089910 kernel: intel_pstate: CPU model not supported Jan 15 00:30:40.089920 kernel: NET: Registered PF_INET6 protocol family Jan 15 00:30:40.089930 kernel: Segment Routing with IPv6 Jan 15 00:30:40.089940 kernel: In-situ OAM (IOAM) with IPv6 Jan 15 00:30:40.089950 kernel: NET: Registered PF_PACKET protocol family Jan 15 00:30:40.089960 kernel: Key type dns_resolver registered Jan 15 00:30:40.089969 kernel: IPI shorthand broadcast: enabled Jan 15 00:30:40.089982 kernel: sched_clock: Marking stable (2063005618, 159697497)->(2255447722, -32744607) Jan 15 00:30:40.089991 kernel: registered taskstats version 1 Jan 15 00:30:40.090001 kernel: Loading compiled-in X.509 certificates Jan 15 00:30:40.090012 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: e8b6753a1cbf8103f5806ce5d59781743c62fae9' Jan 
15 00:30:40.090021 kernel: Demotion targets for Node 0: null Jan 15 00:30:40.090031 kernel: Key type .fscrypt registered Jan 15 00:30:40.090040 kernel: Key type fscrypt-provisioning registered Jan 15 00:30:40.090067 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 15 00:30:40.090079 kernel: ima: Allocated hash algorithm: sha1 Jan 15 00:30:40.090089 kernel: ima: No architecture policies found Jan 15 00:30:40.090099 kernel: clk: Disabling unused clocks Jan 15 00:30:40.090109 kernel: Freeing unused kernel image (initmem) memory: 15432K Jan 15 00:30:40.090119 kernel: Write protecting the kernel read-only data: 45056k Jan 15 00:30:40.090129 kernel: Freeing unused kernel image (rodata/data gap) memory: 824K Jan 15 00:30:40.090142 kernel: Run /init as init process Jan 15 00:30:40.090152 kernel: with arguments: Jan 15 00:30:40.090162 kernel: /init Jan 15 00:30:40.090172 kernel: with environment: Jan 15 00:30:40.090206 kernel: HOME=/ Jan 15 00:30:40.090219 kernel: TERM=linux Jan 15 00:30:40.090234 kernel: SCSI subsystem initialized Jan 15 00:30:40.090248 kernel: libata version 3.00 loaded. Jan 15 00:30:40.090459 kernel: ata_piix 0000:00:01.1: version 2.13 Jan 15 00:30:40.090627 kernel: scsi host0: ata_piix Jan 15 00:30:40.090786 kernel: scsi host1: ata_piix Jan 15 00:30:40.090806 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0 Jan 15 00:30:40.090822 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0 Jan 15 00:30:40.090844 kernel: ACPI: bus type USB registered Jan 15 00:30:40.090860 kernel: usbcore: registered new interface driver usbfs Jan 15 00:30:40.090874 kernel: usbcore: registered new interface driver hub Jan 15 00:30:40.090889 kernel: usbcore: registered new device driver usb Jan 15 00:30:40.091093 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Jan 15 00:30:40.092584 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Jan 15 00:30:40.092802 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Jan 15 00:30:40.092955 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Jan 15 00:30:40.093124 kernel: hub 1-0:1.0: USB hub found Jan 15 00:30:40.095394 kernel: hub 1-0:1.0: 2 ports detected Jan 15 00:30:40.095577 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Jan 15 00:30:40.095711 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 15 00:30:40.095725 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 15 00:30:40.095736 kernel: GPT:16515071 != 125829119 Jan 15 00:30:40.095746 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 15 00:30:40.095756 kernel: GPT:16515071 != 125829119 Jan 15 00:30:40.095766 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 15 00:30:40.095780 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 15 00:30:40.095939 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Jan 15 00:30:40.096072 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Jan 15 00:30:40.096229 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues Jan 15 00:30:40.096421 kernel: scsi host2: Virtio SCSI HBA Jan 15 00:30:40.096473 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 15 00:30:40.096485 kernel: device-mapper: uevent: version 1.0.3 Jan 15 00:30:40.096495 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 15 00:30:40.096506 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 15 00:30:40.096516 kernel: raid6: avx2x4 gen() 15528 MB/s Jan 15 00:30:40.096526 kernel: raid6: avx2x2 gen() 15169 MB/s Jan 15 00:30:40.096540 kernel: raid6: avx2x1 gen() 12354 MB/s Jan 15 00:30:40.096556 kernel: raid6: using algorithm avx2x4 gen() 15528 MB/s Jan 15 00:30:40.096566 kernel: raid6: .... xor() 6678 MB/s, rmw enabled Jan 15 00:30:40.096576 kernel: raid6: using avx2x2 recovery algorithm Jan 15 00:30:40.096587 kernel: xor: automatically using best checksumming function avx Jan 15 00:30:40.096597 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 15 00:30:40.096607 kernel: BTRFS: device fsid 1fc5e5ba-2a81-4f9e-b722-a47a3e33c106 devid 1 transid 34 /dev/mapper/usr (253:0) scanned by mount (161) Jan 15 00:30:40.096618 kernel: BTRFS info (device dm-0): first mount of filesystem 1fc5e5ba-2a81-4f9e-b722-a47a3e33c106 Jan 15 00:30:40.096634 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 15 00:30:40.096644 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 15 00:30:40.096657 kernel: BTRFS info (device dm-0): enabling free space tree Jan 15 00:30:40.096667 kernel: loop: module loaded Jan 15 00:30:40.096677 kernel: loop0: detected capacity change from 0 to 100160 Jan 15 00:30:40.096687 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 15 00:30:40.096700 systemd[1]: Successfully made /usr/ read-only. Jan 15 00:30:40.096719 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 15 00:30:40.096730 systemd[1]: Detected virtualization kvm. Jan 15 00:30:40.096741 systemd[1]: Detected architecture x86-64. Jan 15 00:30:40.096751 systemd[1]: Running in initrd. Jan 15 00:30:40.096775 systemd[1]: No hostname configured, using default hostname. Jan 15 00:30:40.096789 systemd[1]: Hostname set to . Jan 15 00:30:40.096806 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 15 00:30:40.096817 systemd[1]: Queued start job for default target initrd.target. Jan 15 00:30:40.096827 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 15 00:30:40.096838 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 00:30:40.096848 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 00:30:40.096860 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 15 00:30:40.096877 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 15 00:30:40.096888 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 15 00:30:40.096899 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 15 00:30:40.096910 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 15 00:30:40.096921 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 15 00:30:40.096932 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 15 00:30:40.096949 systemd[1]: Reached target paths.target - Path Units. Jan 15 00:30:40.096960 systemd[1]: Reached target slices.target - Slice Units. Jan 15 00:30:40.096970 systemd[1]: Reached target swap.target - Swaps. Jan 15 00:30:40.096981 systemd[1]: Reached target timers.target - Timer Units. Jan 15 00:30:40.096991 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 15 00:30:40.097002 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 15 00:30:40.097013 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 15 00:30:40.097029 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 15 00:30:40.097040 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 15 00:30:40.097051 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 15 00:30:40.097062 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 15 00:30:40.097072 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 00:30:40.097083 systemd[1]: Reached target sockets.target - Socket Units. Jan 15 00:30:40.097093 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 15 00:30:40.097110 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 15 00:30:40.097121 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 15 00:30:40.097131 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 15 00:30:40.097143 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 15 00:30:40.097154 systemd[1]: Starting systemd-fsck-usr.service... Jan 15 00:30:40.097164 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 15 00:30:40.098715 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 15 00:30:40.098735 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 00:30:40.098747 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 15 00:30:40.098759 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 00:30:40.098791 systemd[1]: Finished systemd-fsck-usr.service. Jan 15 00:30:40.098803 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 15 00:30:40.098858 systemd-journald[298]: Collecting audit messages is enabled. Jan 15 00:30:40.098890 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 15 00:30:40.098902 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 15 00:30:40.098914 kernel: Bridge firewalling registered Jan 15 00:30:40.098925 systemd-journald[298]: Journal started Jan 15 00:30:40.098948 systemd-journald[298]: Runtime Journal (/run/log/journal/ee86592197724cb091658a9f2a3ced19) is 4.8M, max 39.1M, 34.2M free. 
Jan 15 00:30:40.097939 systemd-modules-load[300]: Inserted module 'br_netfilter' Jan 15 00:30:40.147150 kernel: audit: type=1130 audit(1768437040.141:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:40.147201 systemd[1]: Started systemd-journald.service - Journal Service. Jan 15 00:30:40.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:40.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:40.148209 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 15 00:30:40.156312 kernel: audit: type=1130 audit(1768437040.147:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:40.156346 kernel: audit: type=1130 audit(1768437040.151:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:40.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:40.155733 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 00:30:40.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:40.161208 kernel: audit: type=1130 audit(1768437040.156:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:40.161473 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 15 00:30:40.165347 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 15 00:30:40.169426 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 15 00:30:40.172590 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 15 00:30:40.194847 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 15 00:30:40.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:40.197730 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 00:30:40.201303 kernel: audit: type=1130 audit(1768437040.196:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:30:40.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:40.203616 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 00:30:40.211307 kernel: audit: type=1130 audit(1768437040.203:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:40.211347 kernel: audit: type=1130 audit(1768437040.207:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:40.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:40.203666 systemd-tmpfiles[320]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 15 00:30:40.210816 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 15 00:30:40.212000 audit: BPF prog-id=6 op=LOAD Jan 15 00:30:40.215215 kernel: audit: type=1334 audit(1768437040.212:9): prog-id=6 op=LOAD Jan 15 00:30:40.215378 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 15 00:30:40.217308 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 00:30:40.223207 kernel: audit: type=1130 audit(1768437040.217:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:40.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:40.253042 dracut-cmdline[336]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=1042e64ca7212ba2a277cb872bdf1dc4e195c9fb8110078c443b3efbd2488cb9 Jan 15 00:30:40.293226 systemd-resolved[337]: Positive Trust Anchors: Jan 15 00:30:40.293244 systemd-resolved[337]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 15 00:30:40.293251 systemd-resolved[337]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 15 00:30:40.293309 systemd-resolved[337]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 15 00:30:40.328271 systemd-resolved[337]: Defaulting to hostname 'linux'. Jan 15 00:30:40.330625 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 15 00:30:40.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:40.331228 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 15 00:30:40.385217 kernel: Loading iSCSI transport class v2.0-870. Jan 15 00:30:40.400229 kernel: iscsi: registered transport (tcp) Jan 15 00:30:40.425360 kernel: iscsi: registered transport (qla4xxx) Jan 15 00:30:40.425442 kernel: QLogic iSCSI HBA Driver Jan 15 00:30:40.461744 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 15 00:30:40.481701 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 15 00:30:40.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:40.484894 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 15 00:30:40.546433 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 15 00:30:40.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:40.553586 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 15 00:30:40.554899 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 15 00:30:40.605352 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 15 00:30:40.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:40.607000 audit: BPF prog-id=7 op=LOAD Jan 15 00:30:40.607000 audit: BPF prog-id=8 op=LOAD Jan 15 00:30:40.609162 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 00:30:40.643676 systemd-udevd[581]: Using default interface naming scheme 'v257'. Jan 15 00:30:40.656548 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 00:30:40.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:30:40.661009 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 15 00:30:40.700310 dracut-pre-trigger[648]: rd.md=0: removing MD RAID activation Jan 15 00:30:40.701541 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 15 00:30:40.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:40.703000 audit: BPF prog-id=9 op=LOAD Jan 15 00:30:40.705437 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 15 00:30:40.749389 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 15 00:30:40.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:40.752982 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 15 00:30:40.780327 systemd-networkd[688]: lo: Link UP Jan 15 00:30:40.781209 systemd-networkd[688]: lo: Gained carrier Jan 15 00:30:40.783071 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 15 00:30:40.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:40.785784 systemd[1]: Reached target network.target - Network. Jan 15 00:30:40.865168 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 00:30:40.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:40.868630 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 15 00:30:41.008626 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 15 00:30:41.040450 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 15 00:30:41.057407 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 15 00:30:41.075637 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 15 00:30:41.078968 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 15 00:30:41.099220 kernel: cryptd: max_cpu_qlen set to 1000 Jan 15 00:30:41.126508 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 00:30:41.126685 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 00:30:41.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:41.141697 disk-uuid[745]: Primary Header is updated. Jan 15 00:30:41.141697 disk-uuid[745]: Secondary Entries is updated. Jan 15 00:30:41.141697 disk-uuid[745]: Secondary Header is updated. Jan 15 00:30:41.138777 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 00:30:41.152436 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 15 00:30:41.185214 kernel: AES CTR mode by8 optimization enabled Jan 15 00:30:41.220080 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 15 00:30:41.284235 systemd-networkd[688]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 15 00:30:41.286047 systemd-networkd[688]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 15 00:30:41.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:41.289565 systemd-networkd[688]: eth1: Link UP Jan 15 00:30:41.289886 systemd-networkd[688]: eth1: Gained carrier Jan 15 00:30:41.289911 systemd-networkd[688]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 15 00:30:41.301872 systemd-networkd[688]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/yy-digitalocean.network Jan 15 00:30:41.301883 systemd-networkd[688]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Jan 15 00:30:41.302675 systemd-networkd[688]: eth0: Link UP Jan 15 00:30:41.303714 systemd-networkd[688]: eth0: Gained carrier Jan 15 00:30:41.303731 systemd-networkd[688]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/yy-digitalocean.network Jan 15 00:30:41.318315 systemd-networkd[688]: eth0: DHCPv4 address 164.92.64.55/20, gateway 164.92.64.1 acquired from 169.254.169.253 Jan 15 00:30:41.343394 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 15 00:30:41.354149 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 15 00:30:41.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:41.354851 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 00:30:41.355975 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 15 00:30:41.359382 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 15 00:30:41.362901 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 00:30:41.386867 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 15 00:30:41.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:42.210270 disk-uuid[747]: Warning: The kernel is still using the old partition table. Jan 15 00:30:42.210270 disk-uuid[747]: The new table will be used at the next reboot or after you Jan 15 00:30:42.210270 disk-uuid[747]: run partprobe(8) or kpartx(8) Jan 15 00:30:42.210270 disk-uuid[747]: The operation has completed successfully. Jan 15 00:30:42.228278 kernel: kauditd_printk_skb: 16 callbacks suppressed Jan 15 00:30:42.228417 kernel: audit: type=1130 audit(1768437042.221:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:30:42.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:42.220274 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 15 00:30:42.220483 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 15 00:30:42.223958 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 15 00:30:42.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:42.236254 kernel: audit: type=1131 audit(1768437042.221:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:42.267213 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (833) Jan 15 00:30:42.270894 kernel: BTRFS info (device vda6): first mount of filesystem 372d586b-dfcb-4c9b-8d15-cc0618567790 Jan 15 00:30:42.270969 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 15 00:30:42.275535 kernel: BTRFS info (device vda6): turning on async discard Jan 15 00:30:42.275617 kernel: BTRFS info (device vda6): enabling free space tree Jan 15 00:30:42.284206 kernel: BTRFS info (device vda6): last unmount of filesystem 372d586b-dfcb-4c9b-8d15-cc0618567790 Jan 15 00:30:42.284695 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 15 00:30:42.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:42.289380 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 15 00:30:42.290652 kernel: audit: type=1130 audit(1768437042.285:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:42.524349 ignition[852]: Ignition 2.22.0 Jan 15 00:30:42.524362 ignition[852]: Stage: fetch-offline Jan 15 00:30:42.524419 ignition[852]: no configs at "/usr/lib/ignition/base.d" Jan 15 00:30:42.524433 ignition[852]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 15 00:30:42.527009 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 15 00:30:42.533137 kernel: audit: type=1130 audit(1768437042.527:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:42.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:42.524539 ignition[852]: parsed url from cmdline: "" Jan 15 00:30:42.530391 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 15 00:30:42.524543 ignition[852]: no config URL provided Jan 15 00:30:42.524549 ignition[852]: reading system config file "/usr/lib/ignition/user.ign" Jan 15 00:30:42.524558 ignition[852]: no config at "/usr/lib/ignition/user.ign" Jan 15 00:30:42.524563 ignition[852]: failed to fetch config: resource requires networking Jan 15 00:30:42.525084 ignition[852]: Ignition finished successfully Jan 15 00:30:42.564690 ignition[860]: Ignition 2.22.0 Jan 15 00:30:42.564704 ignition[860]: Stage: fetch Jan 15 00:30:42.564975 ignition[860]: no configs at "/usr/lib/ignition/base.d" Jan 15 00:30:42.564986 ignition[860]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 15 00:30:42.565088 ignition[860]: parsed url from cmdline: "" Jan 15 00:30:42.565092 ignition[860]: no config URL provided Jan 15 00:30:42.565097 ignition[860]: reading system config file "/usr/lib/ignition/user.ign" Jan 15 00:30:42.565105 ignition[860]: no config at "/usr/lib/ignition/user.ign" Jan 15 00:30:42.565135 ignition[860]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Jan 15 00:30:42.581117 ignition[860]: GET result: OK Jan 15 00:30:42.581908 ignition[860]: parsing config with SHA512: 1b76cc89dd52e34c5d1fdae4e5c17b397fce58f93ad43d71ebf03e85e81590b77029b4925a665817e39b41b85ca7bb71c5ace3fb8b2968fee211518138377c9e Jan 15 00:30:42.587078 unknown[860]: fetched base config from "system" Jan 15 00:30:42.587096 unknown[860]: fetched base config from "system" Jan 15 00:30:42.587625 ignition[860]: fetch: fetch complete Jan 15 00:30:42.587113 unknown[860]: fetched user config from "digitalocean" Jan 15 00:30:42.587631 ignition[860]: fetch: fetch passed Jan 15 00:30:42.592568 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 15 00:30:42.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:42.587698 ignition[860]: Ignition finished successfully Jan 15 00:30:42.598587 kernel: audit: type=1130 audit(1768437042.593:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:42.595987 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 15 00:30:42.653186 ignition[867]: Ignition 2.22.0 Jan 15 00:30:42.653926 ignition[867]: Stage: kargs Jan 15 00:30:42.654207 ignition[867]: no configs at "/usr/lib/ignition/base.d" Jan 15 00:30:42.654221 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 15 00:30:42.657361 ignition[867]: kargs: kargs passed Jan 15 00:30:42.657835 ignition[867]: Ignition finished successfully Jan 15 00:30:42.659545 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 15 00:30:42.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:42.664385 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 15 00:30:42.667037 kernel: audit: type=1130 audit(1768437042.659:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:30:42.710763 ignition[874]: Ignition 2.22.0 Jan 15 00:30:42.710781 ignition[874]: Stage: disks Jan 15 00:30:42.711010 ignition[874]: no configs at "/usr/lib/ignition/base.d" Jan 15 00:30:42.711025 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 15 00:30:42.711880 ignition[874]: disks: disks passed Jan 15 00:30:42.711938 ignition[874]: Ignition finished successfully Jan 15 00:30:42.715031 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 15 00:30:42.719997 kernel: audit: type=1130 audit(1768437042.715:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:42.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:42.716599 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 15 00:30:42.720617 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 15 00:30:42.721793 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 15 00:30:42.722888 systemd[1]: Reached target sysinit.target - System Initialization. Jan 15 00:30:42.723855 systemd[1]: Reached target basic.target - Basic System. Jan 15 00:30:42.726705 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 15 00:30:42.766531 systemd-fsck[883]: ROOT: clean, 15/456736 files, 38230/456704 blocks Jan 15 00:30:42.770639 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 15 00:30:42.776895 kernel: audit: type=1130 audit(1768437042.771:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:42.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:42.774329 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 15 00:30:42.904195 kernel: EXT4-fs (vda9): mounted filesystem 6f459a58-5046-4124-bfbc-09321f1e67d8 r/w with ordered data mode. Quota mode: none. Jan 15 00:30:42.904961 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 15 00:30:42.906117 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 15 00:30:42.908910 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 15 00:30:42.911685 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 15 00:30:42.915422 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Jan 15 00:30:42.924444 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 15 00:30:42.925093 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 15 00:30:42.925144 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 15 00:30:42.935777 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Jan 15 00:30:42.939261 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (891) Jan 15 00:30:42.942507 kernel: BTRFS info (device vda6): first mount of filesystem 372d586b-dfcb-4c9b-8d15-cc0618567790 Jan 15 00:30:42.942592 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 15 00:30:42.952493 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 15 00:30:42.959587 kernel: BTRFS info (device vda6): turning on async discard Jan 15 00:30:42.959664 kernel: BTRFS info (device vda6): enabling free space tree Jan 15 00:30:42.964013 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 15 00:30:43.039090 initrd-setup-root[921]: cut: /sysroot/etc/passwd: No such file or directory Jan 15 00:30:43.055208 initrd-setup-root[928]: cut: /sysroot/etc/group: No such file or directory Jan 15 00:30:43.056877 coreos-metadata[894]: Jan 15 00:30:43.056 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 15 00:30:43.065236 initrd-setup-root[935]: cut: /sysroot/etc/shadow: No such file or directory Jan 15 00:30:43.068665 coreos-metadata[893]: Jan 15 00:30:43.068 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 15 00:30:43.075425 coreos-metadata[894]: Jan 15 00:30:43.075 INFO Fetch successful Jan 15 00:30:43.079220 initrd-setup-root[942]: cut: /sysroot/etc/gshadow: No such file or directory Jan 15 00:30:43.082752 coreos-metadata[894]: Jan 15 00:30:43.082 INFO wrote hostname ci-4515.1.0-n-4ecc98c3fd to /sysroot/etc/hostname Jan 15 00:30:43.084439 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 15 00:30:43.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:43.093205 kernel: audit: type=1130 audit(1768437043.086:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:43.093294 coreos-metadata[893]: Jan 15 00:30:43.092 INFO Fetch successful Jan 15 00:30:43.096657 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Jan 15 00:30:43.103746 kernel: audit: type=1130 audit(1768437043.097:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-afterburn-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:43.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-afterburn-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:43.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-afterburn-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:43.096791 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Jan 15 00:30:43.105351 systemd-networkd[688]: eth1: Gained IPv6LL Jan 15 00:30:43.105724 systemd-networkd[688]: eth0: Gained IPv6LL Jan 15 00:30:43.223259 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Jan 15 00:30:43.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:43.225754 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 15 00:30:43.227004 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 15 00:30:43.257669 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 15 00:30:43.261229 kernel: BTRFS info (device vda6): last unmount of filesystem 372d586b-dfcb-4c9b-8d15-cc0618567790 Jan 15 00:30:43.278450 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 15 00:30:43.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:43.313309 ignition[1012]: INFO : Ignition 2.22.0 Jan 15 00:30:43.314353 ignition[1012]: INFO : Stage: mount Jan 15 00:30:43.315309 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 00:30:43.318135 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 15 00:30:43.318135 ignition[1012]: INFO : mount: mount passed Jan 15 00:30:43.318135 ignition[1012]: INFO : Ignition finished successfully Jan 15 00:30:43.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:43.319270 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 15 00:30:43.322430 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 15 00:30:43.350612 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 15 00:30:43.375220 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1023) Jan 15 00:30:43.384654 kernel: BTRFS info (device vda6): first mount of filesystem 372d586b-dfcb-4c9b-8d15-cc0618567790 Jan 15 00:30:43.384805 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 15 00:30:43.390244 kernel: BTRFS info (device vda6): turning on async discard Jan 15 00:30:43.390335 kernel: BTRFS info (device vda6): enabling free space tree Jan 15 00:30:43.392554 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 15 00:30:43.449119 ignition[1040]: INFO : Ignition 2.22.0 Jan 15 00:30:43.450273 ignition[1040]: INFO : Stage: files Jan 15 00:30:43.452209 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 00:30:43.452209 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 15 00:30:43.456377 ignition[1040]: DEBUG : files: compiled without relabeling support, skipping Jan 15 00:30:43.458511 ignition[1040]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 15 00:30:43.458511 ignition[1040]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 15 00:30:43.464131 ignition[1040]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 15 00:30:43.465149 ignition[1040]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 15 00:30:43.466027 ignition[1040]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 15 00:30:43.465750 unknown[1040]: wrote ssh authorized keys file for user: core Jan 15 00:30:43.468054 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 15 00:30:43.469502 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 15 00:30:43.509875 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 15 00:30:43.682869 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 15 00:30:43.682869 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 15 00:30:43.685085 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 15 00:30:43.685085 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 15 00:30:43.685085 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 15 00:30:43.685085 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 15 00:30:43.685085 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 15 00:30:43.685085 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 15 00:30:43.685085 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 15 00:30:43.685085 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 15 00:30:43.685085 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 15 00:30:43.691773 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 15 00:30:43.691773 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: 
op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 15 00:30:43.691773 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 15 00:30:43.691773 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 15 00:30:44.204298 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 15 00:30:44.971341 systemd-networkd[688]: eth1: DHCPv4 address 10.124.0.32/20 acquired from 169.254.169.253 Jan 15 00:30:46.493376 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 15 00:30:46.493376 ignition[1040]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 15 00:30:46.495967 ignition[1040]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 15 00:30:46.498199 ignition[1040]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 15 00:30:46.498199 ignition[1040]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 15 00:30:46.498199 ignition[1040]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 15 00:30:46.498199 ignition[1040]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 15 00:30:46.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:46.504439 ignition[1040]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 15 00:30:46.504439 ignition[1040]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 15 00:30:46.504439 ignition[1040]: INFO : files: files passed Jan 15 00:30:46.504439 ignition[1040]: INFO : Ignition finished successfully Jan 15 00:30:46.501012 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 15 00:30:46.507459 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 15 00:30:46.509830 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 15 00:30:46.522473 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 15 00:30:46.522659 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 15 00:30:46.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:46.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:30:46.534680 initrd-setup-root-after-ignition[1072]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 15 00:30:46.535754 initrd-setup-root-after-ignition[1072]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 15 00:30:46.537303 initrd-setup-root-after-ignition[1076]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 15 00:30:46.539660 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 15 00:30:46.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:46.540692 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 15 00:30:46.542854 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 15 00:30:46.602618 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 15 00:30:46.602789 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 15 00:30:46.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:46.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:46.604250 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 15 00:30:46.604990 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 15 00:30:46.606433 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 15 00:30:46.608114 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 15 00:30:46.639713 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 15 00:30:46.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:46.642152 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 15 00:30:46.675932 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 15 00:30:46.676229 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 15 00:30:46.677540 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 00:30:46.678581 systemd[1]: Stopped target timers.target - Timer Units. Jan 15 00:30:46.679500 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 15 00:30:46.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:46.679658 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 15 00:30:46.680898 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 15 00:30:46.681509 systemd[1]: Stopped target basic.target - Basic System. Jan 15 00:30:46.682540 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
Jan 15 00:30:46.683420 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 15 00:30:46.684361 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 15 00:30:46.685426 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 15 00:30:46.686443 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 15 00:30:46.687330 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 15 00:30:46.688438 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 15 00:30:46.689596 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 15 00:30:46.690659 systemd[1]: Stopped target swap.target - Swaps. Jan 15 00:30:46.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:46.696142 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 15 00:30:46.696307 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 15 00:30:46.697468 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 15 00:30:46.698616 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 15 00:30:46.699519 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 15 00:30:46.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:46.699659 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 00:30:46.700445 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 15 00:30:46.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:46.700610 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 15 00:30:46.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:46.701848 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 15 00:30:46.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:46.702035 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 15 00:30:46.703118 systemd[1]: ignition-files.service: Deactivated successfully. Jan 15 00:30:46.703299 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 15 00:30:46.703913 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 15 00:30:46.704079 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 15 00:30:46.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:46.705905 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jan 15 00:30:46.707763 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 15 00:30:46.707987 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 00:30:46.711560 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 15 00:30:46.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:46.714135 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 15 00:30:46.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:46.714415 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 00:30:46.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:46.716727 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 15 00:30:46.716899 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 00:30:46.717631 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 15 00:30:46.717817 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 15 00:30:46.727160 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 15 00:30:46.727281 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 15 00:30:46.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:46.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:46.747392 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 15 00:30:46.754553 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 15 00:30:46.754667 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 15 00:30:46.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:46.758840 ignition[1096]: INFO : Ignition 2.22.0 Jan 15 00:30:46.758840 ignition[1096]: INFO : Stage: umount Jan 15 00:30:46.760088 ignition[1096]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 00:30:46.760088 ignition[1096]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 15 00:30:46.761737 ignition[1096]: INFO : umount: umount passed Jan 15 00:30:46.762276 ignition[1096]: INFO : Ignition finished successfully Jan 15 00:30:46.763869 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 15 00:30:46.764008 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 15 00:30:46.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:46.765334 systemd[1]: ignition-disks.service: Deactivated successfully. 
Jan 15 00:30:46.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:46.765394 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 15 00:30:46.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:46.766114 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 15 00:30:46.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:46.766227 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 15 00:30:46.766903 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 15 00:30:46.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:46.766954 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 15 00:30:46.767767 systemd[1]: Stopped target network.target - Network. Jan 15 00:30:46.768600 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 15 00:30:46.768661 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 15 00:30:46.769625 systemd[1]: Stopped target paths.target - Path Units. Jan 15 00:30:46.770373 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 15 00:30:46.774301 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 00:30:46.775505 systemd[1]: Stopped target slices.target - Slice Units. Jan 15 00:30:46.775989 systemd[1]: Stopped target sockets.target - Socket Units. Jan 15 00:30:46.776962 systemd[1]: iscsid.socket: Deactivated successfully. Jan 15 00:30:46.777018 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 15 00:30:46.777757 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 15 00:30:46.777797 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 15 00:30:46.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:46.778627 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 15 00:30:46.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:46.778655 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 15 00:30:46.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:46.779564 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 15 00:30:46.779652 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 15 00:30:46.780404 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 15 00:30:46.780455 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Jan 15 00:30:46.781321 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 15 00:30:46.781384 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 15 00:30:46.782323 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 15 00:30:46.783369 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 15 00:30:46.793949 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 15 00:30:46.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:46.794098 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 15 00:30:46.797000 audit: BPF prog-id=6 op=UNLOAD Jan 15 00:30:46.801241 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 15 00:30:46.801441 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 15 00:30:46.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:46.804000 audit: BPF prog-id=9 op=UNLOAD Jan 15 00:30:46.805696 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 15 00:30:46.806423 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 15 00:30:46.806474 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 15 00:30:46.810387 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 15 00:30:46.811011 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 15 00:30:46.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:46.811112 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 15 00:30:46.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:46.813700 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 15 00:30:46.813794 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 15 00:30:46.814811 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 15 00:30:46.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:46.814891 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 15 00:30:46.817117 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 00:30:46.829677 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 15 00:30:46.829910 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 00:30:46.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:46.832318 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Jan 15 00:30:46.833169 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 15 00:30:46.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:46.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:46.833679 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 15 00:30:46.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:46.833718 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 00:30:46.834348 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 15 00:30:46.834405 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 15 00:30:46.835025 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 15 00:30:46.835080 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 15 00:30:46.836367 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 15 00:30:46.836421 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 00:30:46.849374 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 15 00:30:46.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:46.850042 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 15 00:30:46.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:46.850159 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 15 00:30:46.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:46.853436 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 15 00:30:46.853529 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 00:30:46.855366 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 00:30:46.855463 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 00:30:46.878078 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 15 00:30:46.879315 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 15 00:30:46.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:46.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:30:46.881419 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 15 00:30:46.882319 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 15 00:30:46.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:46.884687 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 15 00:30:46.887600 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 15 00:30:46.910522 systemd[1]: Switching root. Jan 15 00:30:46.965372 systemd-journald[298]: Journal stopped Jan 15 00:30:48.528862 systemd-journald[298]: Received SIGTERM from PID 1 (systemd). Jan 15 00:30:48.528993 kernel: SELinux: policy capability network_peer_controls=1 Jan 15 00:30:48.529012 kernel: SELinux: policy capability open_perms=1 Jan 15 00:30:48.529028 kernel: SELinux: policy capability extended_socket_class=1 Jan 15 00:30:48.529043 kernel: SELinux: policy capability always_check_network=0 Jan 15 00:30:48.529056 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 15 00:30:48.529072 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 15 00:30:48.529094 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 15 00:30:48.529131 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 15 00:30:48.529148 kernel: SELinux: policy capability userspace_initial_context=0 Jan 15 00:30:48.529168 systemd[1]: Successfully loaded SELinux policy in 87.754ms. Jan 15 00:30:48.529292 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.910ms. Jan 15 00:30:48.529312 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 15 00:30:48.529327 systemd[1]: Detected virtualization kvm. Jan 15 00:30:48.529340 systemd[1]: Detected architecture x86-64. Jan 15 00:30:48.529364 systemd[1]: Detected first boot. Jan 15 00:30:48.529378 systemd[1]: Hostname set to . Jan 15 00:30:48.529393 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 15 00:30:48.529407 kernel: kauditd_printk_skb: 51 callbacks suppressed Jan 15 00:30:48.529421 kernel: audit: type=1334 audit(1768437047.238:88): prog-id=10 op=LOAD Jan 15 00:30:48.529435 kernel: audit: type=1334 audit(1768437047.238:89): prog-id=10 op=UNLOAD Jan 15 00:30:48.529447 kernel: audit: type=1334 audit(1768437047.238:90): prog-id=11 op=LOAD Jan 15 00:30:48.529464 kernel: audit: type=1334 audit(1768437047.238:91): prog-id=11 op=UNLOAD Jan 15 00:30:48.529477 zram_generator::config[1141]: No configuration found. Jan 15 00:30:48.529495 kernel: Guest personality initialized and is inactive Jan 15 00:30:48.529508 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 15 00:30:48.529520 kernel: Initialized host personality Jan 15 00:30:48.529534 kernel: NET: Registered PF_VSOCK protocol family Jan 15 00:30:48.529546 systemd[1]: Populated /etc with preset unit settings. 
Jan 15 00:30:48.529565 kernel: audit: type=1334 audit(1768437047.983:92): prog-id=12 op=LOAD Jan 15 00:30:48.529583 kernel: audit: type=1334 audit(1768437047.983:93): prog-id=3 op=UNLOAD Jan 15 00:30:48.529600 kernel: audit: type=1334 audit(1768437047.983:94): prog-id=13 op=LOAD Jan 15 00:30:48.529616 kernel: audit: type=1334 audit(1768437047.983:95): prog-id=14 op=LOAD Jan 15 00:30:48.529633 kernel: audit: type=1334 audit(1768437047.983:96): prog-id=4 op=UNLOAD Jan 15 00:30:48.529650 kernel: audit: type=1334 audit(1768437047.983:97): prog-id=5 op=UNLOAD Jan 15 00:30:48.529669 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 15 00:30:48.529696 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 15 00:30:48.529714 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 15 00:30:48.529738 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 15 00:30:48.529753 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 15 00:30:48.529766 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 15 00:30:48.529787 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 15 00:30:48.529801 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 15 00:30:48.529816 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 15 00:30:48.529828 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 15 00:30:48.529842 systemd[1]: Created slice user.slice - User and Session Slice. Jan 15 00:30:48.529855 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 00:30:48.529878 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 00:30:48.529891 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 15 00:30:48.529906 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 15 00:30:48.529920 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 15 00:30:48.529934 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 15 00:30:48.529947 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 15 00:30:48.529960 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 15 00:30:48.529979 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 15 00:30:48.529993 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 15 00:30:48.530006 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 15 00:30:48.530019 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 15 00:30:48.530033 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 15 00:30:48.530046 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 00:30:48.530059 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 15 00:30:48.530073 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 15 00:30:48.530093 systemd[1]: Reached target slices.target - Slice Units. Jan 15 00:30:48.530108 systemd[1]: Reached target swap.target - Swaps. 
Jan 15 00:30:48.530121 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 15 00:30:48.530134 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 15 00:30:48.530148 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 15 00:30:48.530163 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 15 00:30:48.533324 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 15 00:30:48.533383 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 15 00:30:48.533398 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 15 00:30:48.533412 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 15 00:30:48.533428 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 15 00:30:48.533442 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 00:30:48.533461 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 15 00:30:48.533474 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 15 00:30:48.533494 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 15 00:30:48.533507 systemd[1]: Mounting media.mount - External Media Directory... Jan 15 00:30:48.533522 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 00:30:48.533536 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 15 00:30:48.533550 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 15 00:30:48.533565 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 15 00:30:48.533580 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 15 00:30:48.533599 systemd[1]: Reached target machines.target - Containers. Jan 15 00:30:48.533613 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 15 00:30:48.533626 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 00:30:48.533639 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 15 00:30:48.533653 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 15 00:30:48.533666 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 15 00:30:48.533685 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 15 00:30:48.533701 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 15 00:30:48.533715 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 15 00:30:48.533728 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 15 00:30:48.533742 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 15 00:30:48.533755 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 15 00:30:48.533769 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 15 00:30:48.533794 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
Jan 15 00:30:48.533814 systemd[1]: Stopped systemd-fsck-usr.service. Jan 15 00:30:48.533833 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 15 00:30:48.533846 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 15 00:30:48.533860 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 15 00:30:48.533874 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 15 00:30:48.533887 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 15 00:30:48.533901 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 15 00:30:48.533919 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 15 00:30:48.533933 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 00:30:48.533947 kernel: fuse: init (API version 7.41) Jan 15 00:30:48.533964 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 15 00:30:48.533982 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 15 00:30:48.534009 systemd[1]: Mounted media.mount - External Media Directory. Jan 15 00:30:48.534028 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 15 00:30:48.534054 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 15 00:30:48.534073 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 15 00:30:48.534090 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 00:30:48.534109 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 15 00:30:48.534127 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 15 00:30:48.534148 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 15 00:30:48.534163 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 15 00:30:48.536263 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 15 00:30:48.536326 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 15 00:30:48.536341 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 15 00:30:48.536356 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 15 00:30:48.536370 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 15 00:30:48.536390 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 15 00:30:48.536404 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 15 00:30:48.536418 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 15 00:30:48.536432 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 15 00:30:48.536446 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 15 00:30:48.536469 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 15 00:30:48.536491 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jan 15 00:30:48.536521 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 15 00:30:48.536541 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 00:30:48.536561 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 15 00:30:48.536581 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 15 00:30:48.536596 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 15 00:30:48.536609 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 15 00:30:48.536630 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 15 00:30:48.536645 kernel: ACPI: bus type drm_connector registered Jan 15 00:30:48.536659 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 15 00:30:48.536673 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 15 00:30:48.536687 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 15 00:30:48.536701 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 15 00:30:48.536715 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 15 00:30:48.536734 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 15 00:30:48.536748 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 15 00:30:48.536778 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 15 00:30:48.536798 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 15 00:30:48.536871 systemd-journald[1216]: Collecting audit messages is enabled. Jan 15 00:30:48.536899 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 15 00:30:48.536924 systemd-journald[1216]: Journal started Jan 15 00:30:48.536958 systemd-journald[1216]: Runtime Journal (/run/log/journal/ee86592197724cb091658a9f2a3ced19) is 4.8M, max 39.1M, 34.2M free. Jan 15 00:30:48.085000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 15 00:30:48.543794 systemd[1]: Started systemd-journald.service - Journal Service. Jan 15 00:30:48.543866 kernel: loop1: detected capacity change from 0 to 111544 Jan 15 00:30:48.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:48.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:30:48.255000 audit: BPF prog-id=14 op=UNLOAD Jan 15 00:30:48.255000 audit: BPF prog-id=13 op=UNLOAD Jan 15 00:30:48.261000 audit: BPF prog-id=15 op=LOAD Jan 15 00:30:48.262000 audit: BPF prog-id=16 op=LOAD Jan 15 00:30:48.262000 audit: BPF prog-id=17 op=LOAD Jan 15 00:30:48.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:48.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:48.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:48.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:48.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:48.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:48.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:48.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:48.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:48.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:48.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:48.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:48.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:30:48.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:48.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:48.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:48.512000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 15 00:30:48.512000 audit[1216]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7fff43d97cd0 a2=4000 a3=0 items=0 ppid=1 pid=1216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:48.512000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 15 00:30:48.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:48.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:48.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:47.969046 systemd[1]: Queued start job for default target multi-user.target. Jan 15 00:30:47.984897 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 15 00:30:48.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:47.986008 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 15 00:30:48.542853 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 15 00:30:48.543758 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 15 00:30:48.550467 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 15 00:30:48.553474 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 15 00:30:48.554538 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 15 00:30:48.613604 systemd-journald[1216]: Time spent on flushing to /var/log/journal/ee86592197724cb091658a9f2a3ced19 is 47.509ms for 1150 entries. Jan 15 00:30:48.613604 systemd-journald[1216]: System Journal (/var/log/journal/ee86592197724cb091658a9f2a3ced19) is 8M, max 163.5M, 155.5M free. 
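The journald lines above give the runtime and system journal locations; the directory component ee86592197724cb091658a9f2a3ced19 under /var/log/journal is the machine ID that systemd-machine-id-commit.service persists to /etc/machine-id. A small sketch (illustrative, assuming python3 on a systemd host) makes the correspondence explicit:

    #!/usr/bin/env python3
    # Sketch: the persistent journal lives under /var/log/journal/<machine-id>,
    # which is why the path in the log ends in the machine ID.
    from pathlib import Path

    machine_id = Path("/etc/machine-id").read_text().strip()
    journal_dir = Path("/var/log/journal") / machine_id

    print("machine-id         :", machine_id)
    print("persistent journal :", journal_dir)
    print("directory exists   :", journal_dir.is_dir())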
Jan 15 00:30:48.689542 kernel: loop2: detected capacity change from 0 to 119256 Jan 15 00:30:48.689629 systemd-journald[1216]: Received client request to flush runtime journal. Jan 15 00:30:48.689741 kernel: loop3: detected capacity change from 0 to 224512 Jan 15 00:30:48.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:48.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:48.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:48.622817 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 15 00:30:48.625285 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 15 00:30:48.633095 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 15 00:30:48.690897 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 15 00:30:48.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:48.707459 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 00:30:48.726598 kernel: loop4: detected capacity change from 0 to 8 Jan 15 00:30:48.742239 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 15 00:30:48.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:48.745000 audit: BPF prog-id=18 op=LOAD Jan 15 00:30:48.746000 audit: BPF prog-id=19 op=LOAD Jan 15 00:30:48.746000 audit: BPF prog-id=20 op=LOAD Jan 15 00:30:48.750435 kernel: loop5: detected capacity change from 0 to 111544 Jan 15 00:30:48.750608 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 15 00:30:48.753000 audit: BPF prog-id=21 op=LOAD Jan 15 00:30:48.759442 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 15 00:30:48.763903 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 15 00:30:48.785231 kernel: loop6: detected capacity change from 0 to 119256 Jan 15 00:30:48.787000 audit: BPF prog-id=22 op=LOAD Jan 15 00:30:48.787000 audit: BPF prog-id=23 op=LOAD Jan 15 00:30:48.787000 audit: BPF prog-id=24 op=LOAD Jan 15 00:30:48.789898 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 15 00:30:48.793000 audit: BPF prog-id=25 op=LOAD Jan 15 00:30:48.793000 audit: BPF prog-id=26 op=LOAD Jan 15 00:30:48.793000 audit: BPF prog-id=27 op=LOAD Jan 15 00:30:48.800033 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
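systemd-journal-flush.service, finished above after journald logged "Received client request to flush runtime journal", moves the early-boot journal from /run/log/journal into the persistent /var/log/journal. A minimal sketch of triggering and checking the same thing by hand (assumes journalctl on PATH; the flush itself needs root):

    #!/usr/bin/env python3
    # Sketch: report journal disk usage, then request the runtime-to-persistent
    # flush that systemd-journal-flush.service performs at boot.
    import subprocess

    subprocess.run(["journalctl", "--disk-usage"], check=True)
    subprocess.run(["journalctl", "--flush"], check=True)   # needs root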
Jan 15 00:30:48.807221 kernel: loop7: detected capacity change from 0 to 224512 Jan 15 00:30:48.832287 kernel: loop1: detected capacity change from 0 to 8 Jan 15 00:30:48.835080 (sd-merge)[1289]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-digitalocean.raw'. Jan 15 00:30:48.849358 (sd-merge)[1289]: Merged extensions into '/usr'. Jan 15 00:30:48.851452 systemd-tmpfiles[1292]: ACLs are not supported, ignoring. Jan 15 00:30:48.852422 systemd-tmpfiles[1292]: ACLs are not supported, ignoring. Jan 15 00:30:48.864456 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 00:30:48.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:48.868899 systemd[1]: Reload requested from client PID 1240 ('systemd-sysext') (unit systemd-sysext.service)... Jan 15 00:30:48.868942 systemd[1]: Reloading... Jan 15 00:30:49.068296 systemd-nsresourced[1293]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 15 00:30:49.102287 zram_generator::config[1338]: No configuration found. Jan 15 00:30:49.384772 systemd-oomd[1290]: No swap; memory pressure usage will be degraded Jan 15 00:30:49.486471 systemd-resolved[1291]: Positive Trust Anchors: Jan 15 00:30:49.486496 systemd-resolved[1291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 15 00:30:49.486504 systemd-resolved[1291]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 15 00:30:49.486572 systemd-resolved[1291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 15 00:30:49.532901 systemd-resolved[1291]: Using system hostname 'ci-4515.1.0-n-4ecc98c3fd'. Jan 15 00:30:49.625232 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 15 00:30:49.625534 systemd[1]: Reloading finished in 754 ms. Jan 15 00:30:49.645598 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 15 00:30:49.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:49.646844 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 15 00:30:49.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:49.648060 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 15 00:30:49.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:30:49.649049 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 15 00:30:49.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:49.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:49.650349 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 15 00:30:49.656896 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 15 00:30:49.665800 systemd[1]: Starting ensure-sysext.service... Jan 15 00:30:49.673550 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 15 00:30:49.686000 audit: BPF prog-id=28 op=LOAD Jan 15 00:30:49.686000 audit: BPF prog-id=22 op=UNLOAD Jan 15 00:30:49.686000 audit: BPF prog-id=29 op=LOAD Jan 15 00:30:49.686000 audit: BPF prog-id=30 op=LOAD Jan 15 00:30:49.686000 audit: BPF prog-id=23 op=UNLOAD Jan 15 00:30:49.686000 audit: BPF prog-id=24 op=UNLOAD Jan 15 00:30:49.693000 audit: BPF prog-id=31 op=LOAD Jan 15 00:30:49.693000 audit: BPF prog-id=18 op=UNLOAD Jan 15 00:30:49.693000 audit: BPF prog-id=32 op=LOAD Jan 15 00:30:49.693000 audit: BPF prog-id=33 op=LOAD Jan 15 00:30:49.693000 audit: BPF prog-id=19 op=UNLOAD Jan 15 00:30:49.693000 audit: BPF prog-id=20 op=UNLOAD Jan 15 00:30:49.695000 audit: BPF prog-id=34 op=LOAD Jan 15 00:30:49.695000 audit: BPF prog-id=25 op=UNLOAD Jan 15 00:30:49.695000 audit: BPF prog-id=35 op=LOAD Jan 15 00:30:49.695000 audit: BPF prog-id=36 op=LOAD Jan 15 00:30:49.695000 audit: BPF prog-id=26 op=UNLOAD Jan 15 00:30:49.695000 audit: BPF prog-id=27 op=UNLOAD Jan 15 00:30:49.699000 audit: BPF prog-id=37 op=LOAD Jan 15 00:30:49.699000 audit: BPF prog-id=15 op=UNLOAD Jan 15 00:30:49.699000 audit: BPF prog-id=38 op=LOAD Jan 15 00:30:49.699000 audit: BPF prog-id=39 op=LOAD Jan 15 00:30:49.699000 audit: BPF prog-id=16 op=UNLOAD Jan 15 00:30:49.699000 audit: BPF prog-id=17 op=UNLOAD Jan 15 00:30:49.700000 audit: BPF prog-id=40 op=LOAD Jan 15 00:30:49.700000 audit: BPF prog-id=21 op=UNLOAD Jan 15 00:30:49.718462 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 15 00:30:49.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:49.721000 audit: BPF prog-id=8 op=UNLOAD Jan 15 00:30:49.721000 audit: BPF prog-id=7 op=UNLOAD Jan 15 00:30:49.722000 audit: BPF prog-id=41 op=LOAD Jan 15 00:30:49.722000 audit: BPF prog-id=42 op=LOAD Jan 15 00:30:49.725010 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 00:30:49.731791 systemd[1]: Reload requested from client PID 1380 ('systemctl') (unit ensure-sysext.service)... Jan 15 00:30:49.731823 systemd[1]: Reloading... Jan 15 00:30:49.769044 systemd-udevd[1383]: Using default interface naming scheme 'v257'. Jan 15 00:30:49.775701 systemd-tmpfiles[1381]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. 
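The (sd-merge) lines above record systemd-sysext overlaying the extension images containerd-flatcar.raw, docker-flatcar.raw, kubernetes.raw and oem-digitalocean.raw onto /usr, after which systemd-sysext.service finishes. As an illustrative sketch (assuming the systemd-sysext tool is present, as it is on Flatcar), the merged state can be inspected on the running host:

    #!/usr/bin/env python3
    # Sketch: show which hierarchies are extended and which extension images
    # are installed, matching the merge reported in the boot log.
    import subprocess

    for args in (["systemd-sysext", "status"], ["systemd-sysext", "list"]):
        print("$ " + " ".join(args))
        subprocess.run(args)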
Jan 15 00:30:49.775740 systemd-tmpfiles[1381]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 15 00:30:49.776123 systemd-tmpfiles[1381]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 15 00:30:49.777789 systemd-tmpfiles[1381]: ACLs are not supported, ignoring. Jan 15 00:30:49.777854 systemd-tmpfiles[1381]: ACLs are not supported, ignoring. Jan 15 00:30:49.799678 systemd-tmpfiles[1381]: Detected autofs mount point /boot during canonicalization of boot. Jan 15 00:30:49.799701 systemd-tmpfiles[1381]: Skipping /boot Jan 15 00:30:49.847222 zram_generator::config[1426]: No configuration found. Jan 15 00:30:49.858827 systemd-tmpfiles[1381]: Detected autofs mount point /boot during canonicalization of boot. Jan 15 00:30:49.858849 systemd-tmpfiles[1381]: Skipping /boot Jan 15 00:30:50.063206 kernel: mousedev: PS/2 mouse device common for all mice Jan 15 00:30:50.097205 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 15 00:30:50.103281 kernel: ACPI: button: Power Button [PWRF] Jan 15 00:30:50.157226 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 15 00:30:50.195214 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 15 00:30:50.373783 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 15 00:30:50.373846 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 15 00:30:50.375825 systemd[1]: Reloading finished in 643 ms. Jan 15 00:30:50.389539 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 00:30:50.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:30:50.393000 audit: BPF prog-id=43 op=LOAD Jan 15 00:30:50.393000 audit: BPF prog-id=37 op=UNLOAD Jan 15 00:30:50.393000 audit: BPF prog-id=44 op=LOAD Jan 15 00:30:50.393000 audit: BPF prog-id=45 op=LOAD Jan 15 00:30:50.393000 audit: BPF prog-id=38 op=UNLOAD Jan 15 00:30:50.393000 audit: BPF prog-id=39 op=UNLOAD Jan 15 00:30:50.394000 audit: BPF prog-id=46 op=LOAD Jan 15 00:30:50.394000 audit: BPF prog-id=47 op=LOAD Jan 15 00:30:50.394000 audit: BPF prog-id=41 op=UNLOAD Jan 15 00:30:50.394000 audit: BPF prog-id=42 op=UNLOAD Jan 15 00:30:50.395000 audit: BPF prog-id=48 op=LOAD Jan 15 00:30:50.395000 audit: BPF prog-id=28 op=UNLOAD Jan 15 00:30:50.395000 audit: BPF prog-id=49 op=LOAD Jan 15 00:30:50.395000 audit: BPF prog-id=50 op=LOAD Jan 15 00:30:50.395000 audit: BPF prog-id=29 op=UNLOAD Jan 15 00:30:50.395000 audit: BPF prog-id=30 op=UNLOAD Jan 15 00:30:50.397000 audit: BPF prog-id=51 op=LOAD Jan 15 00:30:50.397000 audit: BPF prog-id=40 op=UNLOAD Jan 15 00:30:50.397000 audit: BPF prog-id=52 op=LOAD Jan 15 00:30:50.397000 audit: BPF prog-id=34 op=UNLOAD Jan 15 00:30:50.398000 audit: BPF prog-id=53 op=LOAD Jan 15 00:30:50.398000 audit: BPF prog-id=54 op=LOAD Jan 15 00:30:50.398000 audit: BPF prog-id=35 op=UNLOAD Jan 15 00:30:50.398000 audit: BPF prog-id=36 op=UNLOAD Jan 15 00:30:50.398000 audit: BPF prog-id=55 op=LOAD Jan 15 00:30:50.398000 audit: BPF prog-id=31 op=UNLOAD Jan 15 00:30:50.398000 audit: BPF prog-id=56 op=LOAD Jan 15 00:30:50.398000 audit: BPF prog-id=57 op=LOAD Jan 15 00:30:50.398000 audit: BPF prog-id=32 op=UNLOAD Jan 15 00:30:50.398000 audit: BPF prog-id=33 op=UNLOAD Jan 15 00:30:50.402742 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 00:30:50.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:50.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:50.483850 systemd[1]: Finished ensure-sysext.service. Jan 15 00:30:50.497370 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 15 00:30:50.497459 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 15 00:30:50.497335 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jan 15 00:30:50.498508 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 00:30:50.507622 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 15 00:30:50.513050 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 15 00:30:50.518041 kernel: Console: switching to colour dummy device 80x25 Jan 15 00:30:50.518135 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 15 00:30:50.518158 kernel: [drm] features: -context_init Jan 15 00:30:50.519323 kernel: [drm] number of scanouts: 1 Jan 15 00:30:50.518661 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 15 00:30:50.522223 kernel: [drm] number of cap sets: 0 Jan 15 00:30:50.525316 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 Jan 15 00:30:50.526036 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 15 00:30:50.530226 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 15 00:30:50.533041 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 15 00:30:50.537052 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 15 00:30:50.537450 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 00:30:50.537628 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 15 00:30:50.541070 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 15 00:30:50.548268 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 15 00:30:50.549277 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 15 00:30:50.563000 audit: BPF prog-id=58 op=LOAD Jan 15 00:30:50.559485 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 15 00:30:50.570471 kernel: ISO 9660 Extensions: RRIP_1991A Jan 15 00:30:50.569000 audit: BPF prog-id=59 op=LOAD Jan 15 00:30:50.568067 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 15 00:30:50.571510 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 15 00:30:50.576771 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 15 00:30:50.582519 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 00:30:50.584528 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 00:30:50.588047 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jan 15 00:30:50.615215 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 15 00:30:50.615320 kernel: Console: switching to colour frame buffer device 128x48 Jan 15 00:30:50.624159 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 15 00:30:50.676316 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 15 00:30:50.729920 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 15 00:30:50.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:50.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:50.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:30:50.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:50.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:50.751647 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 15 00:30:50.755058 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 15 00:30:50.756463 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 15 00:30:50.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:50.762990 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 15 00:30:50.781602 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 15 00:30:50.785431 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 00:30:50.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:50.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:50.786733 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 00:30:50.796281 systemd-networkd[1518]: lo: Link UP Jan 15 00:30:50.796291 systemd-networkd[1518]: lo: Gained carrier Jan 15 00:30:50.800040 systemd-networkd[1518]: eth1: Configuring with /run/systemd/network/10-8a:a7:d0:58:1d:9d.network. Jan 15 00:30:50.800933 systemd-networkd[1518]: eth0: Configuring with /run/systemd/network/10-e6:1a:c0:fb:95:8a.network. Jan 15 00:30:50.801375 systemd-networkd[1518]: eth1: Link UP Jan 15 00:30:50.801442 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 15 00:30:50.801544 systemd-networkd[1518]: eth1: Gained carrier Jan 15 00:30:50.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:50.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:50.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:50.803874 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
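systemd-networkd reports above that lo, eth0 and eth1 came up, with eth0 and eth1 configured from the generated profiles under /run/systemd/network/. A minimal sketch for inspecting the same links with networkctl (illustrative; interface names taken from the log, networkctl assumed on PATH):

    #!/usr/bin/env python3
    # Sketch: list networkd-managed links and show per-link detail for the
    # interfaces named in the boot log.
    import subprocess

    subprocess.run(["networkctl", "list", "--no-pager"])
    for link in ("eth0", "eth1"):
        subprocess.run(["networkctl", "status", link, "--no-pager"])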
Jan 15 00:30:50.805307 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 15 00:30:50.808188 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 15 00:30:50.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:50.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:50.808673 systemd-networkd[1518]: eth0: Link UP Jan 15 00:30:50.809413 systemd-networkd[1518]: eth0: Gained carrier Jan 15 00:30:50.809519 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 15 00:30:50.814869 systemd[1]: Reached target network.target - Network. Jan 15 00:30:50.823439 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 15 00:30:50.827504 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 15 00:30:50.830334 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 15 00:30:50.830897 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 15 00:30:50.835000 audit[1520]: SYSTEM_BOOT pid=1520 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 15 00:30:50.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:50.840609 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 00:30:50.843264 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 15 00:30:50.858146 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 15 00:30:50.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:50.887000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 15 00:30:50.887000 audit[1556]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc0454e7a0 a2=420 a3=0 items=0 ppid=1501 pid=1556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:50.887000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 15 00:30:50.888324 augenrules[1556]: No rules Jan 15 00:30:50.891956 systemd[1]: audit-rules.service: Deactivated successfully. Jan 15 00:30:50.892271 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
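The audit records above around audit-rules.service include a PROCTITLE field, which encodes the command line as hex with NUL bytes between arguments. Decoding the value from this log (a self-contained sketch, no assumptions beyond python3) recovers the auditctl invocation that loaded /etc/audit/audit.rules:

    #!/usr/bin/env python3
    # Sketch: decode the PROCTITLE hex string captured in the audit record above.
    proctitle = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"

    argv = bytes.fromhex(proctitle).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))  # /sbin/auditctl -R /etc/audit/audit.rules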
Jan 15 00:30:50.938080 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 15 00:30:51.015616 kernel: EDAC MC: Ver: 3.0.0 Jan 15 00:30:51.061972 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 15 00:30:51.062252 systemd[1]: Reached target time-set.target - System Time Set. Jan 15 00:30:51.080242 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 00:30:52.220300 systemd-resolved[1291]: Clock change detected. Flushing caches. Jan 15 00:30:52.220349 systemd-timesyncd[1519]: Contacted time server 172.104.209.204:123 (0.flatcar.pool.ntp.org). Jan 15 00:30:52.220550 systemd-timesyncd[1519]: Initial clock synchronization to Thu 2026-01-15 00:30:52.220061 UTC. Jan 15 00:30:52.387115 ldconfig[1509]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 15 00:30:52.390959 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 15 00:30:52.393547 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 15 00:30:52.416603 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 15 00:30:52.417998 systemd[1]: Reached target sysinit.target - System Initialization. Jan 15 00:30:52.420686 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 15 00:30:52.421528 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 15 00:30:52.423226 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 15 00:30:52.424131 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 15 00:30:52.424729 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 15 00:30:52.425182 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 15 00:30:52.425777 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 15 00:30:52.426222 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 15 00:30:52.426625 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 15 00:30:52.426652 systemd[1]: Reached target paths.target - Path Units. Jan 15 00:30:52.427073 systemd[1]: Reached target timers.target - Timer Units. Jan 15 00:30:52.440745 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 15 00:30:52.443290 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 15 00:30:52.449509 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 15 00:30:52.453183 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 15 00:30:52.455395 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 15 00:30:52.467110 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 15 00:30:52.468171 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 15 00:30:52.469920 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 15 00:30:52.473987 systemd[1]: Reached target sockets.target - Socket Units. Jan 15 00:30:52.475189 systemd[1]: Reached target basic.target - Basic System. 
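The jump from 00:30:51 to 00:30:52 above is systemd-timesyncd's initial synchronization against 0.flatcar.pool.ntp.org, after which systemd-resolved flushes its caches. A small sketch for checking the synchronization state later (illustrative; assumes systemd-timesyncd is the active NTP client and timedatectl is on PATH):

    #!/usr/bin/env python3
    # Sketch: query the current NTP server, offset and poll interval from
    # systemd-timesyncd, matching the "Contacted time server" line above.
    import subprocess

    subprocess.run(["timedatectl", "timesync-status"])
    subprocess.run(["timedatectl", "show-timesync"])  # key=value form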
Jan 15 00:30:52.475676 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 15 00:30:52.475710 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 15 00:30:52.477096 systemd[1]: Starting containerd.service - containerd container runtime... Jan 15 00:30:52.483215 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 15 00:30:52.491311 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 15 00:30:52.495203 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 15 00:30:52.499981 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 15 00:30:52.509369 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 15 00:30:52.510321 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 15 00:30:52.516530 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 15 00:30:52.526371 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 15 00:30:52.530851 coreos-metadata[1577]: Jan 15 00:30:52.530 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 15 00:30:52.537412 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 15 00:30:52.542377 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 15 00:30:52.548120 coreos-metadata[1577]: Jan 15 00:30:52.545 INFO Fetch successful Jan 15 00:30:52.550471 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 15 00:30:52.569487 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 15 00:30:52.573670 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 15 00:30:52.574754 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 15 00:30:52.576827 systemd[1]: Starting update-engine.service - Update Engine... Jan 15 00:30:52.583669 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 15 00:30:52.599867 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 15 00:30:52.606169 jq[1582]: false Jan 15 00:30:52.611245 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 15 00:30:52.613219 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 15 00:30:52.613755 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 15 00:30:52.617859 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 15 00:30:52.637104 jq[1595]: true Jan 15 00:30:52.645485 google_oslogin_nss_cache[1584]: oslogin_cache_refresh[1584]: Refreshing passwd entry cache Jan 15 00:30:52.645810 extend-filesystems[1583]: Found /dev/vda6 Jan 15 00:30:52.639393 oslogin_cache_refresh[1584]: Refreshing passwd entry cache Jan 15 00:30:52.664586 systemd[1]: motdgen.service: Deactivated successfully. 
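coreos-metadata logs above that it fetched http://169.254.169.254/metadata/v1.json on its first attempt. The same document can be read directly from inside the droplet; the link-local metadata address only answers on the instance itself. A minimal sketch using only the standard library:

    #!/usr/bin/env python3
    # Sketch: fetch the DigitalOcean metadata document named in the log and
    # print its top-level keys instead of dumping the whole JSON body.
    import json
    import urllib.request

    URL = "http://169.254.169.254/metadata/v1.json"  # taken from the log line above

    with urllib.request.urlopen(URL, timeout=5) as resp:
        metadata = json.load(resp)

    print(sorted(metadata))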
Jan 15 00:30:52.665272 google_oslogin_nss_cache[1584]: oslogin_cache_refresh[1584]: Failure getting users, quitting Jan 15 00:30:52.665272 google_oslogin_nss_cache[1584]: oslogin_cache_refresh[1584]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 15 00:30:52.665272 google_oslogin_nss_cache[1584]: oslogin_cache_refresh[1584]: Refreshing group entry cache Jan 15 00:30:52.659382 oslogin_cache_refresh[1584]: Failure getting users, quitting Jan 15 00:30:52.665070 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 15 00:30:52.659410 oslogin_cache_refresh[1584]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 15 00:30:52.659484 oslogin_cache_refresh[1584]: Refreshing group entry cache Jan 15 00:30:52.670061 extend-filesystems[1583]: Found /dev/vda9 Jan 15 00:30:52.671911 google_oslogin_nss_cache[1584]: oslogin_cache_refresh[1584]: Failure getting groups, quitting Jan 15 00:30:52.671911 google_oslogin_nss_cache[1584]: oslogin_cache_refresh[1584]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 15 00:30:52.671169 oslogin_cache_refresh[1584]: Failure getting groups, quitting Jan 15 00:30:52.671189 oslogin_cache_refresh[1584]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 15 00:30:52.674131 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 15 00:30:52.675167 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 15 00:30:52.688729 extend-filesystems[1583]: Checking size of /dev/vda9 Jan 15 00:30:52.704576 update_engine[1593]: I20260115 00:30:52.701987 1593 main.cc:92] Flatcar Update Engine starting Jan 15 00:30:52.710898 tar[1605]: linux-amd64/LICENSE Jan 15 00:30:52.713898 tar[1605]: linux-amd64/helm Jan 15 00:30:52.739670 dbus-daemon[1578]: [system] SELinux support is enabled Jan 15 00:30:52.740101 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 15 00:30:52.745657 extend-filesystems[1583]: Resized partition /dev/vda9 Jan 15 00:30:52.746900 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 15 00:30:52.746946 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 15 00:30:52.749445 jq[1614]: true Jan 15 00:30:52.753827 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 15 00:30:52.757413 extend-filesystems[1635]: resize2fs 1.47.3 (8-Jul-2025) Jan 15 00:30:52.753975 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jan 15 00:30:52.754000 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 15 00:30:52.774753 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 14138363 blocks Jan 15 00:30:52.771641 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 15 00:30:52.777326 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 15 00:30:52.782468 systemd[1]: Started update-engine.service - Update Engine. 
Jan 15 00:30:52.785793 update_engine[1593]: I20260115 00:30:52.785737 1593 update_check_scheduler.cc:74] Next update check in 9m33s Jan 15 00:30:52.786106 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 15 00:30:52.868604 systemd-logind[1592]: New seat seat0. Jan 15 00:30:52.889531 systemd-logind[1592]: Watching system buttons on /dev/input/event2 (Power Button) Jan 15 00:30:52.889574 systemd-logind[1592]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 15 00:30:52.890212 systemd[1]: Started systemd-logind.service - User Login Management. Jan 15 00:30:52.982070 kernel: EXT4-fs (vda9): resized filesystem to 14138363 Jan 15 00:30:53.007156 bash[1656]: Updated "/home/core/.ssh/authorized_keys" Jan 15 00:30:53.003185 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 15 00:30:53.008294 extend-filesystems[1635]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 15 00:30:53.008294 extend-filesystems[1635]: old_desc_blocks = 1, new_desc_blocks = 7 Jan 15 00:30:53.008294 extend-filesystems[1635]: The filesystem on /dev/vda9 is now 14138363 (4k) blocks long. Jan 15 00:30:53.028224 extend-filesystems[1583]: Resized filesystem in /dev/vda9 Jan 15 00:30:53.010576 systemd[1]: Starting sshkeys.service... Jan 15 00:30:53.016229 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 15 00:30:53.017554 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 15 00:30:53.057452 systemd-networkd[1518]: eth0: Gained IPv6LL Jan 15 00:30:53.064418 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 15 00:30:53.070543 systemd[1]: Reached target network-online.target - Network is Online. Jan 15 00:30:53.077282 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 00:30:53.084739 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 15 00:30:53.134884 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 15 00:30:53.140529 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 15 00:30:53.249160 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 15 00:30:53.331831 coreos-metadata[1670]: Jan 15 00:30:53.331 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 15 00:30:53.353513 coreos-metadata[1670]: Jan 15 00:30:53.350 INFO Fetch successful Jan 15 00:30:53.364618 sshd_keygen[1637]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 15 00:30:53.371007 unknown[1670]: wrote ssh authorized keys file for user: core Jan 15 00:30:53.411722 update-ssh-keys[1690]: Updated "/home/core/.ssh/authorized_keys" Jan 15 00:30:53.412993 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 15 00:30:53.415407 systemd[1]: Finished sshkeys.service. Jan 15 00:30:53.433733 locksmithd[1640]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 15 00:30:53.452336 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 15 00:30:53.463121 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 15 00:30:53.470323 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 15 00:30:53.473621 systemd[1]: Started sshd@0-164.92.64.55:22-20.161.92.111:46944.service - OpenSSH per-connection server daemon (20.161.92.111:46944). 
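The extend-filesystems output above shows /dev/vda9 being grown online from 456704 to 14138363 4 KiB blocks. A quick arithmetic sketch, using only the numbers already in the log, converts those block counts into sizes:

    #!/usr/bin/env python3
    # Sketch: convert the resize2fs block counts from the log into GiB.
    BLOCK_SIZE = 4096        # "(4k) blocks" per the log
    OLD_BLOCKS = 456_704     # before the online resize
    NEW_BLOCKS = 14_138_363  # after the online resize

    to_gib = lambda blocks: blocks * BLOCK_SIZE / 2**30
    print(f"before: {to_gib(OLD_BLOCKS):.2f} GiB")  # ~1.74 GiB
    print(f"after : {to_gib(NEW_BLOCKS):.2f} GiB")  # ~53.93 GiB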
Jan 15 00:30:53.493048 containerd[1616]: time="2026-01-15T00:30:53Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 15 00:30:53.502185 containerd[1616]: time="2026-01-15T00:30:53.501167905Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 15 00:30:53.528706 systemd[1]: issuegen.service: Deactivated successfully. Jan 15 00:30:53.529106 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 15 00:30:53.542204 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 15 00:30:53.569546 systemd-networkd[1518]: eth1: Gained IPv6LL Jan 15 00:30:53.594811 containerd[1616]: time="2026-01-15T00:30:53.593295072Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.008µs" Jan 15 00:30:53.594811 containerd[1616]: time="2026-01-15T00:30:53.593354862Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 15 00:30:53.594811 containerd[1616]: time="2026-01-15T00:30:53.593424098Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 15 00:30:53.594811 containerd[1616]: time="2026-01-15T00:30:53.593444653Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 15 00:30:53.594811 containerd[1616]: time="2026-01-15T00:30:53.593732558Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 15 00:30:53.594811 containerd[1616]: time="2026-01-15T00:30:53.593768528Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 15 00:30:53.594811 containerd[1616]: time="2026-01-15T00:30:53.593867922Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 15 00:30:53.594811 containerd[1616]: time="2026-01-15T00:30:53.593885830Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 15 00:30:53.594811 containerd[1616]: time="2026-01-15T00:30:53.594256220Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 15 00:30:53.594811 containerd[1616]: time="2026-01-15T00:30:53.594282475Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 15 00:30:53.594811 containerd[1616]: time="2026-01-15T00:30:53.594300872Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 15 00:30:53.594811 containerd[1616]: time="2026-01-15T00:30:53.594312306Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 15 00:30:53.595371 containerd[1616]: time="2026-01-15T00:30:53.594555189Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 15 00:30:53.595371 containerd[1616]: 
time="2026-01-15T00:30:53.594574518Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 15 00:30:53.595371 containerd[1616]: time="2026-01-15T00:30:53.594728208Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 15 00:30:53.596764 containerd[1616]: time="2026-01-15T00:30:53.596707814Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 15 00:30:53.596901 containerd[1616]: time="2026-01-15T00:30:53.596793729Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 15 00:30:53.596901 containerd[1616]: time="2026-01-15T00:30:53.596813333Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 15 00:30:53.596901 containerd[1616]: time="2026-01-15T00:30:53.596865280Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 15 00:30:53.598998 containerd[1616]: time="2026-01-15T00:30:53.598645527Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 15 00:30:53.598998 containerd[1616]: time="2026-01-15T00:30:53.598797772Z" level=info msg="metadata content store policy set" policy=shared Jan 15 00:30:53.617490 containerd[1616]: time="2026-01-15T00:30:53.616106869Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 15 00:30:53.617490 containerd[1616]: time="2026-01-15T00:30:53.616256977Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 15 00:30:53.617490 containerd[1616]: time="2026-01-15T00:30:53.616397792Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 15 00:30:53.617490 containerd[1616]: time="2026-01-15T00:30:53.616419913Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 15 00:30:53.617490 containerd[1616]: time="2026-01-15T00:30:53.616443568Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 15 00:30:53.617490 containerd[1616]: time="2026-01-15T00:30:53.616462960Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 15 00:30:53.617490 containerd[1616]: time="2026-01-15T00:30:53.616482569Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 15 00:30:53.617490 containerd[1616]: time="2026-01-15T00:30:53.616499338Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 15 00:30:53.617490 containerd[1616]: time="2026-01-15T00:30:53.616516706Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 15 00:30:53.617490 containerd[1616]: time="2026-01-15T00:30:53.616558657Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 15 00:30:53.617490 containerd[1616]: time="2026-01-15T00:30:53.616579724Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service 
type=io.containerd.service.v1 Jan 15 00:30:53.617490 containerd[1616]: time="2026-01-15T00:30:53.616596863Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 15 00:30:53.617490 containerd[1616]: time="2026-01-15T00:30:53.616611776Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 15 00:30:53.617490 containerd[1616]: time="2026-01-15T00:30:53.616629995Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 15 00:30:53.621835 containerd[1616]: time="2026-01-15T00:30:53.616828884Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 15 00:30:53.621835 containerd[1616]: time="2026-01-15T00:30:53.616861133Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 15 00:30:53.621835 containerd[1616]: time="2026-01-15T00:30:53.616883572Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 15 00:30:53.621835 containerd[1616]: time="2026-01-15T00:30:53.616899678Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 15 00:30:53.621835 containerd[1616]: time="2026-01-15T00:30:53.616930075Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 15 00:30:53.621835 containerd[1616]: time="2026-01-15T00:30:53.616946265Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 15 00:30:53.621835 containerd[1616]: time="2026-01-15T00:30:53.616982492Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 15 00:30:53.621835 containerd[1616]: time="2026-01-15T00:30:53.617004532Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 15 00:30:53.621835 containerd[1616]: time="2026-01-15T00:30:53.621209693Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 15 00:30:53.621835 containerd[1616]: time="2026-01-15T00:30:53.621249739Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 15 00:30:53.617997 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 15 00:30:53.629395 containerd[1616]: time="2026-01-15T00:30:53.623313267Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 15 00:30:53.629395 containerd[1616]: time="2026-01-15T00:30:53.623415832Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 15 00:30:53.629395 containerd[1616]: time="2026-01-15T00:30:53.623503080Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 15 00:30:53.629395 containerd[1616]: time="2026-01-15T00:30:53.623524528Z" level=info msg="Start snapshots syncer" Jan 15 00:30:53.629395 containerd[1616]: time="2026-01-15T00:30:53.623600283Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 15 00:30:53.624790 systemd[1]: Started getty@tty1.service - Getty on tty1. 
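Note on the two warnings at the top of the containerd startup above (the ignored `subreaper` key and the on-the-fly migration from config version 2): they indicate that /usr/share/containerd/config.toml is still in an older schema. The daemon's own suggestion, `containerd config migrate`, converts such a file to the current version-3 layout, and `containerd config default` prints a complete current-schema file. Purely as an illustration (not this node's actual configuration), a migrated file begins along these lines:

version = 3

[grpc]
  address = '/run/containerd/containerd.sock'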
Jan 15 00:30:53.629687 containerd[1616]: time="2026-01-15T00:30:53.623980503Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 15 00:30:53.629687 containerd[1616]: time="2026-01-15T00:30:53.624095379Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 15 00:30:53.629939 containerd[1616]: time="2026-01-15T00:30:53.624172437Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 15 00:30:53.629939 containerd[1616]: time="2026-01-15T00:30:53.624377298Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 15 00:30:53.629939 containerd[1616]: time="2026-01-15T00:30:53.624413893Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 15 00:30:53.629939 containerd[1616]: time="2026-01-15T00:30:53.624430366Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 15 00:30:53.629939 containerd[1616]: time="2026-01-15T00:30:53.624447893Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 15 00:30:53.629939 containerd[1616]: time="2026-01-15T00:30:53.624467365Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 15 00:30:53.629939 containerd[1616]: time="2026-01-15T00:30:53.624483709Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 15 00:30:53.629939 containerd[1616]: time="2026-01-15T00:30:53.624501200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 15 00:30:53.629939 containerd[1616]: 
time="2026-01-15T00:30:53.624518204Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 15 00:30:53.629939 containerd[1616]: time="2026-01-15T00:30:53.624543579Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 15 00:30:53.629939 containerd[1616]: time="2026-01-15T00:30:53.627881712Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 15 00:30:53.629939 containerd[1616]: time="2026-01-15T00:30:53.627935111Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 15 00:30:53.629939 containerd[1616]: time="2026-01-15T00:30:53.627950035Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 15 00:30:53.639724 containerd[1616]: time="2026-01-15T00:30:53.627963844Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 15 00:30:53.639724 containerd[1616]: time="2026-01-15T00:30:53.627979088Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 15 00:30:53.639724 containerd[1616]: time="2026-01-15T00:30:53.628002116Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 15 00:30:53.639724 containerd[1616]: time="2026-01-15T00:30:53.628041313Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 15 00:30:53.639724 containerd[1616]: time="2026-01-15T00:30:53.628068577Z" level=info msg="runtime interface created" Jan 15 00:30:53.639724 containerd[1616]: time="2026-01-15T00:30:53.628077110Z" level=info msg="created NRI interface" Jan 15 00:30:53.639724 containerd[1616]: time="2026-01-15T00:30:53.628091066Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 15 00:30:53.639724 containerd[1616]: time="2026-01-15T00:30:53.628126242Z" level=info msg="Connect containerd service" Jan 15 00:30:53.639724 containerd[1616]: time="2026-01-15T00:30:53.628180049Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 15 00:30:53.639724 containerd[1616]: time="2026-01-15T00:30:53.631632722Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 15 00:30:53.630680 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 15 00:30:53.633897 systemd[1]: Reached target getty.target - Login Prompts. 
Jan 15 00:30:53.914970 containerd[1616]: time="2026-01-15T00:30:53.913717530Z" level=info msg="Start subscribing containerd event" Jan 15 00:30:53.914970 containerd[1616]: time="2026-01-15T00:30:53.913791015Z" level=info msg="Start recovering state" Jan 15 00:30:53.914970 containerd[1616]: time="2026-01-15T00:30:53.913939154Z" level=info msg="Start event monitor" Jan 15 00:30:53.914970 containerd[1616]: time="2026-01-15T00:30:53.913958038Z" level=info msg="Start cni network conf syncer for default" Jan 15 00:30:53.914970 containerd[1616]: time="2026-01-15T00:30:53.913967560Z" level=info msg="Start streaming server" Jan 15 00:30:53.914970 containerd[1616]: time="2026-01-15T00:30:53.913979914Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 15 00:30:53.914970 containerd[1616]: time="2026-01-15T00:30:53.913991164Z" level=info msg="runtime interface starting up..." Jan 15 00:30:53.914970 containerd[1616]: time="2026-01-15T00:30:53.914003563Z" level=info msg="starting plugins..." Jan 15 00:30:53.914970 containerd[1616]: time="2026-01-15T00:30:53.914046182Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 15 00:30:53.915265 containerd[1616]: time="2026-01-15T00:30:53.915223844Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 15 00:30:53.918463 containerd[1616]: time="2026-01-15T00:30:53.915332790Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 15 00:30:53.918463 containerd[1616]: time="2026-01-15T00:30:53.915412745Z" level=info msg="containerd successfully booted in 0.425715s" Jan 15 00:30:53.915620 systemd[1]: Started containerd.service - containerd container runtime. Jan 15 00:30:53.952119 sshd[1699]: Accepted publickey for core from 20.161.92.111 port 46944 ssh2: RSA SHA256:EI6zOIQQ/KS+Bep1WC3vAdrkfex1g89wTRiiOfwntxI Jan 15 00:30:53.954127 sshd-session[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:30:53.971843 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 15 00:30:53.977831 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 15 00:30:53.994445 systemd-logind[1592]: New session 1 of user core. Jan 15 00:30:54.031387 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 15 00:30:54.047645 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 15 00:30:54.086795 (systemd)[1727]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 15 00:30:54.094678 systemd-logind[1592]: New session c1 of user core. Jan 15 00:30:54.123469 tar[1605]: linux-amd64/README.md Jan 15 00:30:54.158311 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 15 00:30:54.358136 systemd[1727]: Queued start job for default target default.target. Jan 15 00:30:54.368605 systemd[1727]: Created slice app.slice - User Application Slice. Jan 15 00:30:54.368670 systemd[1727]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 15 00:30:54.368693 systemd[1727]: Reached target paths.target - Paths. Jan 15 00:30:54.369416 systemd[1727]: Reached target timers.target - Timers. Jan 15 00:30:54.372349 systemd[1727]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 15 00:30:54.373929 systemd[1727]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... 
Jan 15 00:30:54.413225 systemd[1727]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 15 00:30:54.413513 systemd[1727]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 15 00:30:54.413704 systemd[1727]: Reached target sockets.target - Sockets. Jan 15 00:30:54.413756 systemd[1727]: Reached target basic.target - Basic System. Jan 15 00:30:54.413796 systemd[1727]: Reached target default.target - Main User Target. Jan 15 00:30:54.413830 systemd[1727]: Startup finished in 292ms. Jan 15 00:30:54.414490 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 15 00:30:54.422288 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 15 00:30:54.624865 systemd[1]: Started sshd@1-164.92.64.55:22-20.161.92.111:46950.service - OpenSSH per-connection server daemon (20.161.92.111:46950). Jan 15 00:30:54.870406 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 00:30:54.875750 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 15 00:30:54.877415 systemd[1]: Startup finished in 3.124s (kernel) + 7.502s (initrd) + 6.873s (userspace) = 17.499s. Jan 15 00:30:54.881692 (kubelet)[1751]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 00:30:54.995882 sshd[1743]: Accepted publickey for core from 20.161.92.111 port 46950 ssh2: RSA SHA256:EI6zOIQQ/KS+Bep1WC3vAdrkfex1g89wTRiiOfwntxI Jan 15 00:30:54.996939 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:30:55.009384 systemd-logind[1592]: New session 2 of user core. Jan 15 00:30:55.016351 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 15 00:30:55.188516 sshd[1758]: Connection closed by 20.161.92.111 port 46950 Jan 15 00:30:55.188274 sshd-session[1743]: pam_unix(sshd:session): session closed for user core Jan 15 00:30:55.196362 systemd[1]: sshd@1-164.92.64.55:22-20.161.92.111:46950.service: Deactivated successfully. Jan 15 00:30:55.199715 systemd[1]: session-2.scope: Deactivated successfully. Jan 15 00:30:55.201734 systemd-logind[1592]: Session 2 logged out. Waiting for processes to exit. Jan 15 00:30:55.204269 systemd-logind[1592]: Removed session 2. Jan 15 00:30:55.263475 systemd[1]: Started sshd@2-164.92.64.55:22-20.161.92.111:46964.service - OpenSSH per-connection server daemon (20.161.92.111:46964). Jan 15 00:30:55.552224 kubelet[1751]: E0115 00:30:55.552094 1751 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 00:30:55.555950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 00:30:55.556183 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 00:30:55.557277 systemd[1]: kubelet.service: Consumed 1.276s CPU time, 266M memory peak. Jan 15 00:30:55.625339 sshd[1766]: Accepted publickey for core from 20.161.92.111 port 46964 ssh2: RSA SHA256:EI6zOIQQ/KS+Bep1WC3vAdrkfex1g89wTRiiOfwntxI Jan 15 00:30:55.626839 sshd-session[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:30:55.632847 systemd-logind[1592]: New session 3 of user core. Jan 15 00:30:55.639349 systemd[1]: Started session-3.scope - Session 3 of User core. 
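Note on the kubelet exit above ("failed to load kubelet config file ... /var/lib/kubelet/config.yaml"): this is expected on a machine that has not yet been joined to a cluster, since that file is normally written by `kubeadm init` or `kubeadm join`, after which the scheduled service restart succeeds. For orientation only, the file carries a KubeletConfiguration object roughly of this shape (illustrative values, not the configuration this node will eventually receive):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd    # matches SystemdCgroup=true in the CRI runtime config above
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
staticPodPath: /etc/kubernetes/manifests
failSwapOn: true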
Jan 15 00:30:55.807744 sshd[1772]: Connection closed by 20.161.92.111 port 46964 Jan 15 00:30:55.808603 sshd-session[1766]: pam_unix(sshd:session): session closed for user core Jan 15 00:30:55.814398 systemd-logind[1592]: Session 3 logged out. Waiting for processes to exit. Jan 15 00:30:55.815370 systemd[1]: sshd@2-164.92.64.55:22-20.161.92.111:46964.service: Deactivated successfully. Jan 15 00:30:55.817398 systemd[1]: session-3.scope: Deactivated successfully. Jan 15 00:30:55.819401 systemd-logind[1592]: Removed session 3. Jan 15 00:30:55.885772 systemd[1]: Started sshd@3-164.92.64.55:22-20.161.92.111:46970.service - OpenSSH per-connection server daemon (20.161.92.111:46970). Jan 15 00:30:56.261879 sshd[1778]: Accepted publickey for core from 20.161.92.111 port 46970 ssh2: RSA SHA256:EI6zOIQQ/KS+Bep1WC3vAdrkfex1g89wTRiiOfwntxI Jan 15 00:30:56.263140 sshd-session[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:30:56.271094 systemd-logind[1592]: New session 4 of user core. Jan 15 00:30:56.276482 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 15 00:30:56.460236 sshd[1781]: Connection closed by 20.161.92.111 port 46970 Jan 15 00:30:56.460938 sshd-session[1778]: pam_unix(sshd:session): session closed for user core Jan 15 00:30:56.466235 systemd[1]: sshd@3-164.92.64.55:22-20.161.92.111:46970.service: Deactivated successfully. Jan 15 00:30:56.468401 systemd[1]: session-4.scope: Deactivated successfully. Jan 15 00:30:56.469350 systemd-logind[1592]: Session 4 logged out. Waiting for processes to exit. Jan 15 00:30:56.471473 systemd-logind[1592]: Removed session 4. Jan 15 00:30:56.532237 systemd[1]: Started sshd@4-164.92.64.55:22-20.161.92.111:46974.service - OpenSSH per-connection server daemon (20.161.92.111:46974). Jan 15 00:30:56.883994 sshd[1787]: Accepted publickey for core from 20.161.92.111 port 46974 ssh2: RSA SHA256:EI6zOIQQ/KS+Bep1WC3vAdrkfex1g89wTRiiOfwntxI Jan 15 00:30:56.885660 sshd-session[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:30:56.891707 systemd-logind[1592]: New session 5 of user core. Jan 15 00:30:56.903401 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 15 00:30:57.025574 sudo[1791]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 15 00:30:57.025981 sudo[1791]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 00:30:57.039364 sudo[1791]: pam_unix(sudo:session): session closed for user root Jan 15 00:30:57.100272 sshd[1790]: Connection closed by 20.161.92.111 port 46974 Jan 15 00:30:57.101128 sshd-session[1787]: pam_unix(sshd:session): session closed for user core Jan 15 00:30:57.105572 systemd[1]: sshd@4-164.92.64.55:22-20.161.92.111:46974.service: Deactivated successfully. Jan 15 00:30:57.107658 systemd[1]: session-5.scope: Deactivated successfully. Jan 15 00:30:57.110354 systemd-logind[1592]: Session 5 logged out. Waiting for processes to exit. Jan 15 00:30:57.111462 systemd-logind[1592]: Removed session 5. Jan 15 00:30:57.174434 systemd[1]: Started sshd@5-164.92.64.55:22-20.161.92.111:46990.service - OpenSSH per-connection server daemon (20.161.92.111:46990). 
Jan 15 00:30:57.545936 sshd[1797]: Accepted publickey for core from 20.161.92.111 port 46990 ssh2: RSA SHA256:EI6zOIQQ/KS+Bep1WC3vAdrkfex1g89wTRiiOfwntxI Jan 15 00:30:57.547484 sshd-session[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:30:57.555603 systemd-logind[1592]: New session 6 of user core. Jan 15 00:30:57.565429 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 15 00:30:57.686507 sudo[1802]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 15 00:30:57.686853 sudo[1802]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 00:30:57.694328 sudo[1802]: pam_unix(sudo:session): session closed for user root Jan 15 00:30:57.705200 sudo[1801]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 15 00:30:57.705666 sudo[1801]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 00:30:57.723179 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 15 00:30:57.775000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 15 00:30:57.777231 kernel: kauditd_printk_skb: 141 callbacks suppressed Jan 15 00:30:57.777313 kernel: audit: type=1305 audit(1768437057.775:235): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 15 00:30:57.775000 audit[1824]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffeb1143d90 a2=420 a3=0 items=0 ppid=1805 pid=1824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:57.782420 kernel: audit: type=1300 audit(1768437057.775:235): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffeb1143d90 a2=420 a3=0 items=0 ppid=1805 pid=1824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:57.782554 kernel: audit: type=1327 audit(1768437057.775:235): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 15 00:30:57.775000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 15 00:30:57.784779 augenrules[1824]: No rules Jan 15 00:30:57.787062 systemd[1]: audit-rules.service: Deactivated successfully. Jan 15 00:30:57.787415 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 15 00:30:57.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:57.789308 sudo[1801]: pam_unix(sudo:session): session closed for user root Jan 15 00:30:57.792171 kernel: audit: type=1130 audit(1768437057.786:236): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:57.792237 kernel: audit: type=1131 audit(1768437057.786:237): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 15 00:30:57.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:57.788000 audit[1801]: USER_END pid=1801 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 00:30:57.797411 kernel: audit: type=1106 audit(1768437057.788:238): pid=1801 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 00:30:57.788000 audit[1801]: CRED_DISP pid=1801 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 00:30:57.800155 kernel: audit: type=1104 audit(1768437057.788:239): pid=1801 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 00:30:57.857289 sshd[1800]: Connection closed by 20.161.92.111 port 46990 Jan 15 00:30:57.856591 sshd-session[1797]: pam_unix(sshd:session): session closed for user core Jan 15 00:30:57.861000 audit[1797]: USER_END pid=1797 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:30:57.866682 systemd[1]: sshd@5-164.92.64.55:22-20.161.92.111:46990.service: Deactivated successfully. Jan 15 00:30:57.870209 systemd[1]: session-6.scope: Deactivated successfully. Jan 15 00:30:57.861000 audit[1797]: CRED_DISP pid=1797 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:30:57.892583 kernel: audit: type=1106 audit(1768437057.861:240): pid=1797 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:30:57.892701 kernel: audit: type=1104 audit(1768437057.861:241): pid=1797 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:30:57.892561 systemd-logind[1592]: Session 6 logged out. Waiting for processes to exit. Jan 15 00:30:57.893206 kernel: audit: type=1131 audit(1768437057.861:242): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-164.92.64.55:22-20.161.92.111:46990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:30:57.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-164.92.64.55:22-20.161.92.111:46990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:57.898265 systemd-logind[1592]: Removed session 6. Jan 15 00:30:57.931255 systemd[1]: Started sshd@6-164.92.64.55:22-20.161.92.111:47006.service - OpenSSH per-connection server daemon (20.161.92.111:47006). Jan 15 00:30:57.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-164.92.64.55:22-20.161.92.111:47006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:30:58.298000 audit[1833]: USER_ACCT pid=1833 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:30:58.299307 sshd[1833]: Accepted publickey for core from 20.161.92.111 port 47006 ssh2: RSA SHA256:EI6zOIQQ/KS+Bep1WC3vAdrkfex1g89wTRiiOfwntxI Jan 15 00:30:58.299000 audit[1833]: CRED_ACQ pid=1833 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:30:58.299000 audit[1833]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd3cd7da10 a2=3 a3=0 items=0 ppid=1 pid=1833 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:58.299000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:30:58.301073 sshd-session[1833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:30:58.306865 systemd-logind[1592]: New session 7 of user core. Jan 15 00:30:58.319434 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 15 00:30:58.322000 audit[1833]: USER_START pid=1833 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:30:58.326000 audit[1836]: CRED_ACQ pid=1836 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:30:58.436776 sudo[1837]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 15 00:30:58.435000 audit[1837]: USER_ACCT pid=1837 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 00:30:58.435000 audit[1837]: CRED_REFR pid=1837 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 15 00:30:58.437176 sudo[1837]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 00:30:58.438000 audit[1837]: USER_START pid=1837 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 00:30:58.968175 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 15 00:30:58.991612 (dockerd)[1855]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 15 00:30:59.406439 dockerd[1855]: time="2026-01-15T00:30:59.405882518Z" level=info msg="Starting up" Jan 15 00:30:59.407945 dockerd[1855]: time="2026-01-15T00:30:59.407901567Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 15 00:30:59.430289 dockerd[1855]: time="2026-01-15T00:30:59.430224552Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 15 00:30:59.457583 systemd[1]: var-lib-docker-metacopy\x2dcheck1267391922-merged.mount: Deactivated successfully. Jan 15 00:30:59.475086 dockerd[1855]: time="2026-01-15T00:30:59.475035427Z" level=info msg="Loading containers: start." Jan 15 00:30:59.489079 kernel: Initializing XFRM netlink socket Jan 15 00:30:59.574000 audit[1905]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1905 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:30:59.574000 audit[1905]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd89fe90e0 a2=0 a3=0 items=0 ppid=1855 pid=1905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:59.574000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 15 00:30:59.578000 audit[1907]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1907 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:30:59.578000 audit[1907]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe0e213960 a2=0 a3=0 items=0 ppid=1855 pid=1907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:59.578000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 15 00:30:59.583000 audit[1909]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1909 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:30:59.583000 audit[1909]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe3b6f0e90 a2=0 a3=0 items=0 ppid=1855 pid=1909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:59.583000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 15 00:30:59.586000 audit[1911]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1911 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:30:59.586000 audit[1911]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdbb5681a0 a2=0 a3=0 items=0 ppid=1855 pid=1911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:59.586000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 15 00:30:59.590000 audit[1913]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1913 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:30:59.590000 audit[1913]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc8d2792e0 a2=0 a3=0 items=0 ppid=1855 pid=1913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:59.590000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 15 00:30:59.593000 audit[1915]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1915 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:30:59.593000 audit[1915]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff335c58f0 a2=0 a3=0 items=0 ppid=1855 pid=1915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:59.593000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 15 00:30:59.597000 audit[1917]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1917 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:30:59.597000 audit[1917]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffddfd5d900 a2=0 a3=0 items=0 ppid=1855 pid=1917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:59.597000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 15 00:30:59.601000 audit[1919]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1919 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:30:59.601000 audit[1919]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe032a17a0 a2=0 a3=0 items=0 ppid=1855 pid=1919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:59.601000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 15 00:30:59.643000 audit[1922]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1922 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:30:59.643000 audit[1922]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 
a1=7ffcebb37fb0 a2=0 a3=0 items=0 ppid=1855 pid=1922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:59.643000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 15 00:30:59.647000 audit[1924]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1924 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:30:59.647000 audit[1924]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffdf46b5aa0 a2=0 a3=0 items=0 ppid=1855 pid=1924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:59.647000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 15 00:30:59.651000 audit[1926]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1926 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:30:59.651000 audit[1926]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffd52973770 a2=0 a3=0 items=0 ppid=1855 pid=1926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:59.651000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 15 00:30:59.655000 audit[1928]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1928 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:30:59.655000 audit[1928]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc3b24f820 a2=0 a3=0 items=0 ppid=1855 pid=1928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:59.655000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 15 00:30:59.658000 audit[1930]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1930 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:30:59.658000 audit[1930]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffc9bbd8cf0 a2=0 a3=0 items=0 ppid=1855 pid=1930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:59.658000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 15 00:30:59.717000 audit[1960]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1960 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:30:59.717000 audit[1960]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc9920ca70 a2=0 a3=0 items=0 ppid=1855 pid=1960 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:59.717000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 15 00:30:59.720000 audit[1962]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1962 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:30:59.720000 audit[1962]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffee0e29450 a2=0 a3=0 items=0 ppid=1855 pid=1962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:59.720000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 15 00:30:59.723000 audit[1964]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1964 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:30:59.723000 audit[1964]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffda872f980 a2=0 a3=0 items=0 ppid=1855 pid=1964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:59.723000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 15 00:30:59.726000 audit[1966]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1966 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:30:59.726000 audit[1966]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc7a57d510 a2=0 a3=0 items=0 ppid=1855 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:59.726000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 15 00:30:59.730000 audit[1968]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1968 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:30:59.730000 audit[1968]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff23d079c0 a2=0 a3=0 items=0 ppid=1855 pid=1968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:59.730000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 15 00:30:59.733000 audit[1970]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1970 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:30:59.733000 audit[1970]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff63a68ce0 a2=0 a3=0 items=0 ppid=1855 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:59.733000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 15 00:30:59.736000 audit[1972]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1972 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:30:59.736000 audit[1972]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe10c7ce70 a2=0 a3=0 items=0 ppid=1855 pid=1972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:59.736000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 15 00:30:59.739000 audit[1974]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1974 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:30:59.739000 audit[1974]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fff9df2c0f0 a2=0 a3=0 items=0 ppid=1855 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:59.739000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 15 00:30:59.742000 audit[1976]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1976 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:30:59.742000 audit[1976]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffe207cd5d0 a2=0 a3=0 items=0 ppid=1855 pid=1976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:59.742000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 15 00:30:59.745000 audit[1978]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1978 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:30:59.745000 audit[1978]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe8e448cb0 a2=0 a3=0 items=0 ppid=1855 pid=1978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:59.745000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 15 00:30:59.748000 audit[1980]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1980 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:30:59.748000 audit[1980]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffc61936060 a2=0 a3=0 items=0 ppid=1855 pid=1980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:59.748000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 15 00:30:59.751000 audit[1982]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1982 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:30:59.751000 audit[1982]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc55c55910 a2=0 a3=0 items=0 ppid=1855 pid=1982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:59.751000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 15 00:30:59.754000 audit[1984]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1984 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:30:59.754000 audit[1984]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffcd21a3350 a2=0 a3=0 items=0 ppid=1855 pid=1984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:59.754000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 15 00:30:59.761000 audit[1989]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1989 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:30:59.761000 audit[1989]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe866abf20 a2=0 a3=0 items=0 ppid=1855 pid=1989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:59.761000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 15 00:30:59.764000 audit[1991]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1991 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:30:59.764000 audit[1991]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff65895e60 a2=0 a3=0 items=0 ppid=1855 pid=1991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:59.764000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 15 00:30:59.767000 audit[1993]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1993 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:30:59.767000 audit[1993]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fffabba6e90 a2=0 a3=0 items=0 ppid=1855 pid=1993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:59.767000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 15 00:30:59.770000 audit[1995]: NETFILTER_CFG table=filter:31 family=10 
entries=1 op=nft_register_chain pid=1995 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:30:59.770000 audit[1995]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffa5d041d0 a2=0 a3=0 items=0 ppid=1855 pid=1995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:59.770000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 15 00:30:59.774000 audit[1997]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=1997 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:30:59.774000 audit[1997]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffeabd78590 a2=0 a3=0 items=0 ppid=1855 pid=1997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:59.774000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 15 00:30:59.777000 audit[1999]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=1999 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:30:59.777000 audit[1999]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc9c36d860 a2=0 a3=0 items=0 ppid=1855 pid=1999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:59.777000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 15 00:30:59.805000 audit[2004]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2004 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:30:59.805000 audit[2004]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffd6bdb8410 a2=0 a3=0 items=0 ppid=1855 pid=2004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:59.805000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 15 00:30:59.811000 audit[2006]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2006 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:30:59.811000 audit[2006]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffcd11f01f0 a2=0 a3=0 items=0 ppid=1855 pid=2006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:59.811000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 15 00:30:59.829000 audit[2014]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2014 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:30:59.829000 audit[2014]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=300 a0=3 a1=7ffcf4acce50 a2=0 a3=0 items=0 ppid=1855 pid=2014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:59.829000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 15 00:30:59.843000 audit[2020]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2020 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:30:59.843000 audit[2020]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe094c1b00 a2=0 a3=0 items=0 ppid=1855 pid=2020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:59.843000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 15 00:30:59.847000 audit[2022]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2022 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:30:59.847000 audit[2022]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7fff1cbdb180 a2=0 a3=0 items=0 ppid=1855 pid=2022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:59.847000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 15 00:30:59.851000 audit[2024]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2024 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:30:59.851000 audit[2024]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd6c5200a0 a2=0 a3=0 items=0 ppid=1855 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:59.851000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 15 00:30:59.854000 audit[2026]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2026 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:30:59.854000 audit[2026]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffdd4621ad0 a2=0 a3=0 items=0 ppid=1855 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:59.854000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 15 00:30:59.857000 audit[2028]: NETFILTER_CFG table=filter:41 family=2 entries=1 
op=nft_register_rule pid=2028 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:30:59.857000 audit[2028]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff8873d630 a2=0 a3=0 items=0 ppid=1855 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:30:59.857000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 15 00:30:59.859350 systemd-networkd[1518]: docker0: Link UP Jan 15 00:30:59.862833 dockerd[1855]: time="2026-01-15T00:30:59.862770339Z" level=info msg="Loading containers: done." Jan 15 00:30:59.881668 dockerd[1855]: time="2026-01-15T00:30:59.881453907Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 15 00:30:59.881668 dockerd[1855]: time="2026-01-15T00:30:59.881557690Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 15 00:30:59.881878 dockerd[1855]: time="2026-01-15T00:30:59.881802459Z" level=info msg="Initializing buildkit" Jan 15 00:30:59.909379 dockerd[1855]: time="2026-01-15T00:30:59.909194860Z" level=info msg="Completed buildkit initialization" Jan 15 00:30:59.922208 dockerd[1855]: time="2026-01-15T00:30:59.922092584Z" level=info msg="Daemon has completed initialization" Jan 15 00:30:59.924083 dockerd[1855]: time="2026-01-15T00:30:59.922436509Z" level=info msg="API listen on /run/docker.sock" Jan 15 00:30:59.923570 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 15 00:30:59.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:31:00.945463 containerd[1616]: time="2026-01-15T00:31:00.945406628Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 15 00:31:02.123566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3021858181.mount: Deactivated successfully. 
Jan 15 00:31:03.628147 containerd[1616]: time="2026-01-15T00:31:03.628061962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:31:03.630159 containerd[1616]: time="2026-01-15T00:31:03.630092275Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=27401903" Jan 15 00:31:03.630700 containerd[1616]: time="2026-01-15T00:31:03.630653731Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:31:03.634682 containerd[1616]: time="2026-01-15T00:31:03.634586750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:31:03.637161 containerd[1616]: time="2026-01-15T00:31:03.637094688Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 2.691630547s" Jan 15 00:31:03.637161 containerd[1616]: time="2026-01-15T00:31:03.637159918Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 15 00:31:03.638258 containerd[1616]: time="2026-01-15T00:31:03.638189877Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 15 00:31:05.806922 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 15 00:31:05.812034 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 00:31:06.116544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 00:31:06.123144 kernel: kauditd_printk_skb: 132 callbacks suppressed Jan 15 00:31:06.123294 kernel: audit: type=1130 audit(1768437066.116:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:31:06.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:31:06.128515 (kubelet)[2145]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 00:31:06.221555 kubelet[2145]: E0115 00:31:06.221198 2145 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 00:31:06.231369 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 00:31:06.231892 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 15 00:31:06.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 15 00:31:06.234195 systemd[1]: kubelet.service: Consumed 263ms CPU time, 110.3M memory peak. Jan 15 00:31:06.239068 kernel: audit: type=1131 audit(1768437066.233:294): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 15 00:31:06.320285 containerd[1616]: time="2026-01-15T00:31:06.320201003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:31:06.323062 containerd[1616]: time="2026-01-15T00:31:06.322971136Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24985199" Jan 15 00:31:06.323673 containerd[1616]: time="2026-01-15T00:31:06.323607976Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:31:06.329096 containerd[1616]: time="2026-01-15T00:31:06.329035190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:31:06.330173 containerd[1616]: time="2026-01-15T00:31:06.329937605Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 2.691680327s" Jan 15 00:31:06.330173 containerd[1616]: time="2026-01-15T00:31:06.329994959Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 15 00:31:06.330979 containerd[1616]: time="2026-01-15T00:31:06.330816821Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 15 00:31:08.241050 containerd[1616]: time="2026-01-15T00:31:08.240973717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:31:08.244625 containerd[1616]: time="2026-01-15T00:31:08.244569558Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:31:08.246201 containerd[1616]: time="2026-01-15T00:31:08.244807953Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19396939" Jan 15 00:31:08.247941 containerd[1616]: time="2026-01-15T00:31:08.247896268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:31:08.249476 containerd[1616]: time="2026-01-15T00:31:08.249426637Z" level=info msg="Pulled image 
\"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.91823075s" Jan 15 00:31:08.249585 containerd[1616]: time="2026-01-15T00:31:08.249478271Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 15 00:31:08.250650 containerd[1616]: time="2026-01-15T00:31:08.250192414Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 15 00:31:08.251807 systemd-resolved[1291]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Jan 15 00:31:09.645355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3493143544.mount: Deactivated successfully. Jan 15 00:31:10.309758 containerd[1616]: time="2026-01-15T00:31:10.308649311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:31:10.309758 containerd[1616]: time="2026-01-15T00:31:10.309559980Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=19572392" Jan 15 00:31:10.310422 containerd[1616]: time="2026-01-15T00:31:10.310345566Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:31:10.312199 containerd[1616]: time="2026-01-15T00:31:10.312152378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:31:10.313267 containerd[1616]: time="2026-01-15T00:31:10.313222671Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 2.062991621s" Jan 15 00:31:10.313460 containerd[1616]: time="2026-01-15T00:31:10.313433175Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 15 00:31:10.314296 containerd[1616]: time="2026-01-15T00:31:10.314252777Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 15 00:31:11.054932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3737396868.mount: Deactivated successfully. Jan 15 00:31:11.361252 systemd-resolved[1291]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
Jan 15 00:31:12.016658 containerd[1616]: time="2026-01-15T00:31:12.016574686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:31:12.018293 containerd[1616]: time="2026-01-15T00:31:12.018200068Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=0" Jan 15 00:31:12.019055 containerd[1616]: time="2026-01-15T00:31:12.018830015Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:31:12.027052 containerd[1616]: time="2026-01-15T00:31:12.025665380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:31:12.027693 containerd[1616]: time="2026-01-15T00:31:12.027637272Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.713340444s" Jan 15 00:31:12.027795 containerd[1616]: time="2026-01-15T00:31:12.027695874Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 15 00:31:12.028528 containerd[1616]: time="2026-01-15T00:31:12.028490425Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 15 00:31:12.844216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3869328348.mount: Deactivated successfully. 
Jan 15 00:31:12.848553 containerd[1616]: time="2026-01-15T00:31:12.848475953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 00:31:12.850290 containerd[1616]: time="2026-01-15T00:31:12.849949507Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 15 00:31:12.851172 containerd[1616]: time="2026-01-15T00:31:12.851126227Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 00:31:12.854536 containerd[1616]: time="2026-01-15T00:31:12.854456889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 00:31:12.855868 containerd[1616]: time="2026-01-15T00:31:12.855290329Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 826.749935ms" Jan 15 00:31:12.855868 containerd[1616]: time="2026-01-15T00:31:12.855338456Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 15 00:31:12.856097 containerd[1616]: time="2026-01-15T00:31:12.856064832Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 15 00:31:13.565744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3132424441.mount: Deactivated successfully. Jan 15 00:31:16.242855 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 15 00:31:16.245691 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 00:31:16.458980 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 00:31:16.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:31:16.464050 kernel: audit: type=1130 audit(1768437076.459:295): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:31:16.474463 (kubelet)[2280]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 00:31:16.559225 kubelet[2280]: E0115 00:31:16.559070 2280 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 00:31:16.564816 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 00:31:16.565094 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 15 00:31:16.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 15 00:31:16.566876 systemd[1]: kubelet.service: Consumed 205ms CPU time, 110.3M memory peak. Jan 15 00:31:16.571088 kernel: audit: type=1131 audit(1768437076.565:296): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 15 00:31:16.811274 containerd[1616]: time="2026-01-15T00:31:16.811117223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:31:16.813060 containerd[1616]: time="2026-01-15T00:31:16.812996758Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=55729101" Jan 15 00:31:16.813758 containerd[1616]: time="2026-01-15T00:31:16.813708648Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:31:16.815927 containerd[1616]: time="2026-01-15T00:31:16.815883488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:31:16.817681 containerd[1616]: time="2026-01-15T00:31:16.817611418Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.961514913s" Jan 15 00:31:16.817681 containerd[1616]: time="2026-01-15T00:31:16.817672942Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 15 00:31:20.329443 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 00:31:20.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:31:20.330182 systemd[1]: kubelet.service: Consumed 205ms CPU time, 110.3M memory peak. Jan 15 00:31:20.337089 kernel: audit: type=1130 audit(1768437080.329:297): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:31:20.337213 kernel: audit: type=1131 audit(1768437080.329:298): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:31:20.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:31:20.334297 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 15 00:31:20.375738 systemd[1]: Reload requested from client PID 2316 ('systemctl') (unit session-7.scope)... Jan 15 00:31:20.375759 systemd[1]: Reloading... Jan 15 00:31:20.543545 zram_generator::config[2362]: No configuration found. Jan 15 00:31:20.826362 systemd[1]: Reloading finished in 450 ms. Jan 15 00:31:20.868245 kernel: audit: type=1334 audit(1768437080.865:299): prog-id=63 op=LOAD Jan 15 00:31:20.865000 audit: BPF prog-id=63 op=LOAD Jan 15 00:31:20.871380 kernel: audit: type=1334 audit(1768437080.865:300): prog-id=48 op=UNLOAD Jan 15 00:31:20.865000 audit: BPF prog-id=48 op=UNLOAD Jan 15 00:31:20.874042 kernel: audit: type=1334 audit(1768437080.865:301): prog-id=64 op=LOAD Jan 15 00:31:20.865000 audit: BPF prog-id=64 op=LOAD Jan 15 00:31:20.876032 kernel: audit: type=1334 audit(1768437080.865:302): prog-id=65 op=LOAD Jan 15 00:31:20.865000 audit: BPF prog-id=65 op=LOAD Jan 15 00:31:20.865000 audit: BPF prog-id=49 op=UNLOAD Jan 15 00:31:20.865000 audit: BPF prog-id=50 op=UNLOAD Jan 15 00:31:20.866000 audit: BPF prog-id=66 op=LOAD Jan 15 00:31:20.866000 audit: BPF prog-id=52 op=UNLOAD Jan 15 00:31:20.866000 audit: BPF prog-id=67 op=LOAD Jan 15 00:31:20.866000 audit: BPF prog-id=68 op=LOAD Jan 15 00:31:20.866000 audit: BPF prog-id=53 op=UNLOAD Jan 15 00:31:20.866000 audit: BPF prog-id=54 op=UNLOAD Jan 15 00:31:20.868000 audit: BPF prog-id=69 op=LOAD Jan 15 00:31:20.868000 audit: BPF prog-id=43 op=UNLOAD Jan 15 00:31:20.868000 audit: BPF prog-id=70 op=LOAD Jan 15 00:31:20.868000 audit: BPF prog-id=71 op=LOAD Jan 15 00:31:20.868000 audit: BPF prog-id=44 op=UNLOAD Jan 15 00:31:20.868000 audit: BPF prog-id=45 op=UNLOAD Jan 15 00:31:20.869000 audit: BPF prog-id=72 op=LOAD Jan 15 00:31:20.869000 audit: BPF prog-id=59 op=UNLOAD Jan 15 00:31:20.874000 audit: BPF prog-id=73 op=LOAD Jan 15 00:31:20.874000 audit: BPF prog-id=60 op=UNLOAD Jan 15 00:31:20.874000 audit: BPF prog-id=74 op=LOAD Jan 15 00:31:20.874000 audit: BPF prog-id=75 op=LOAD Jan 15 00:31:20.874000 audit: BPF prog-id=61 op=UNLOAD Jan 15 00:31:20.877054 kernel: audit: type=1334 audit(1768437080.865:303): prog-id=49 op=UNLOAD Jan 15 00:31:20.877086 kernel: audit: type=1334 audit(1768437080.865:304): prog-id=50 op=UNLOAD Jan 15 00:31:20.874000 audit: BPF prog-id=62 op=UNLOAD Jan 15 00:31:20.875000 audit: BPF prog-id=76 op=LOAD Jan 15 00:31:20.875000 audit: BPF prog-id=55 op=UNLOAD Jan 15 00:31:20.875000 audit: BPF prog-id=77 op=LOAD Jan 15 00:31:20.875000 audit: BPF prog-id=78 op=LOAD Jan 15 00:31:20.875000 audit: BPF prog-id=56 op=UNLOAD Jan 15 00:31:20.875000 audit: BPF prog-id=57 op=UNLOAD Jan 15 00:31:20.879000 audit: BPF prog-id=79 op=LOAD Jan 15 00:31:20.879000 audit: BPF prog-id=51 op=UNLOAD Jan 15 00:31:20.879000 audit: BPF prog-id=80 op=LOAD Jan 15 00:31:20.879000 audit: BPF prog-id=81 op=LOAD Jan 15 00:31:20.879000 audit: BPF prog-id=46 op=UNLOAD Jan 15 00:31:20.879000 audit: BPF prog-id=47 op=UNLOAD Jan 15 00:31:20.882000 audit: BPF prog-id=82 op=LOAD Jan 15 00:31:20.882000 audit: BPF prog-id=58 op=UNLOAD Jan 15 00:31:20.905459 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 15 00:31:20.905564 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 15 00:31:20.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 15 00:31:20.906040 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 15 00:31:20.906137 systemd[1]: kubelet.service: Consumed 142ms CPU time, 98.5M memory peak. Jan 15 00:31:20.908225 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 00:31:21.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:31:21.084410 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 00:31:21.098711 (kubelet)[2416]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 15 00:31:21.154500 kubelet[2416]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 00:31:21.154988 kubelet[2416]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 15 00:31:21.155075 kubelet[2416]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 00:31:21.155322 kubelet[2416]: I0115 00:31:21.155281 2416 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 15 00:31:21.745496 kubelet[2416]: I0115 00:31:21.745328 2416 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 15 00:31:21.745496 kubelet[2416]: I0115 00:31:21.745379 2416 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 15 00:31:21.746081 kubelet[2416]: I0115 00:31:21.746060 2416 server.go:954] "Client rotation is on, will bootstrap in background" Jan 15 00:31:21.782065 kubelet[2416]: E0115 00:31:21.780837 2416 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://164.92.64.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 164.92.64.55:6443: connect: connection refused" logger="UnhandledError" Jan 15 00:31:21.782627 kubelet[2416]: I0115 00:31:21.782599 2416 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 15 00:31:21.796247 kubelet[2416]: I0115 00:31:21.796211 2416 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 15 00:31:21.802286 kubelet[2416]: I0115 00:31:21.802241 2416 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 15 00:31:21.808247 kubelet[2416]: I0115 00:31:21.808148 2416 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 15 00:31:21.808433 kubelet[2416]: I0115 00:31:21.808230 2416 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515.1.0-n-4ecc98c3fd","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 15 00:31:21.810295 kubelet[2416]: I0115 00:31:21.810221 2416 topology_manager.go:138] "Creating topology manager with none policy" Jan 15 00:31:21.810295 kubelet[2416]: I0115 00:31:21.810280 2416 container_manager_linux.go:304] "Creating device plugin manager" Jan 15 00:31:21.811726 kubelet[2416]: I0115 00:31:21.811668 2416 state_mem.go:36] "Initialized new in-memory state store" Jan 15 00:31:21.815378 kubelet[2416]: I0115 00:31:21.815330 2416 kubelet.go:446] "Attempting to sync node with API server" Jan 15 00:31:21.815378 kubelet[2416]: I0115 00:31:21.815376 2416 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 15 00:31:21.816901 kubelet[2416]: I0115 00:31:21.815407 2416 kubelet.go:352] "Adding apiserver pod source" Jan 15 00:31:21.816901 kubelet[2416]: I0115 00:31:21.815420 2416 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 15 00:31:21.825301 kubelet[2416]: W0115 00:31:21.825225 2416 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://164.92.64.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 164.92.64.55:6443: connect: connection refused Jan 15 00:31:21.825301 kubelet[2416]: E0115 00:31:21.825290 2416 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://164.92.64.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 164.92.64.55:6443: connect: connection refused" logger="UnhandledError" Jan 15 00:31:21.826163 kubelet[2416]: W0115 
00:31:21.825390 2416 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://164.92.64.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515.1.0-n-4ecc98c3fd&limit=500&resourceVersion=0": dial tcp 164.92.64.55:6443: connect: connection refused Jan 15 00:31:21.826163 kubelet[2416]: E0115 00:31:21.825431 2416 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://164.92.64.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515.1.0-n-4ecc98c3fd&limit=500&resourceVersion=0\": dial tcp 164.92.64.55:6443: connect: connection refused" logger="UnhandledError" Jan 15 00:31:21.826163 kubelet[2416]: I0115 00:31:21.825963 2416 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 15 00:31:21.829878 kubelet[2416]: I0115 00:31:21.829670 2416 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 15 00:31:21.830375 kubelet[2416]: W0115 00:31:21.830353 2416 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 15 00:31:21.838052 kubelet[2416]: I0115 00:31:21.837272 2416 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 15 00:31:21.838052 kubelet[2416]: I0115 00:31:21.837342 2416 server.go:1287] "Started kubelet" Jan 15 00:31:21.839304 kubelet[2416]: I0115 00:31:21.839264 2416 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 15 00:31:21.840501 kubelet[2416]: I0115 00:31:21.840474 2416 server.go:479] "Adding debug handlers to kubelet server" Jan 15 00:31:21.844650 kubelet[2416]: I0115 00:31:21.843485 2416 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 15 00:31:21.844650 kubelet[2416]: I0115 00:31:21.843966 2416 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 15 00:31:21.849062 kubelet[2416]: E0115 00:31:21.845703 2416 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://164.92.64.55:6443/api/v1/namespaces/default/events\": dial tcp 164.92.64.55:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4515.1.0-n-4ecc98c3fd.188ac028068a60bf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4515.1.0-n-4ecc98c3fd,UID:ci-4515.1.0-n-4ecc98c3fd,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4515.1.0-n-4ecc98c3fd,},FirstTimestamp:2026-01-15 00:31:21.837297855 +0000 UTC m=+0.733018193,LastTimestamp:2026-01-15 00:31:21.837297855 +0000 UTC m=+0.733018193,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515.1.0-n-4ecc98c3fd,}" Jan 15 00:31:21.853595 kubelet[2416]: I0115 00:31:21.853130 2416 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 15 00:31:21.856410 kubelet[2416]: I0115 00:31:21.856362 2416 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 15 00:31:21.864545 kernel: kauditd_printk_skb: 36 callbacks suppressed Jan 15 00:31:21.864713 kernel: audit: type=1325 audit(1768437081.860:341): table=mangle:42 family=2 entries=2 
op=nft_register_chain pid=2427 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:31:21.860000 audit[2427]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2427 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:31:21.864867 kubelet[2416]: E0115 00:31:21.863030 2416 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-n-4ecc98c3fd\" not found" Jan 15 00:31:21.864867 kubelet[2416]: I0115 00:31:21.863070 2416 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 15 00:31:21.864867 kubelet[2416]: I0115 00:31:21.863351 2416 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 15 00:31:21.864867 kubelet[2416]: I0115 00:31:21.863432 2416 reconciler.go:26] "Reconciler: start to sync state" Jan 15 00:31:21.864867 kubelet[2416]: W0115 00:31:21.863961 2416 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://164.92.64.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.92.64.55:6443: connect: connection refused Jan 15 00:31:21.864867 kubelet[2416]: E0115 00:31:21.864041 2416 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://164.92.64.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 164.92.64.55:6443: connect: connection refused" logger="UnhandledError" Jan 15 00:31:21.864867 kubelet[2416]: E0115 00:31:21.864359 2416 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.64.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-n-4ecc98c3fd?timeout=10s\": dial tcp 164.92.64.55:6443: connect: connection refused" interval="200ms" Jan 15 00:31:21.865565 kubelet[2416]: I0115 00:31:21.865522 2416 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 15 00:31:21.860000 audit[2427]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff56d089e0 a2=0 a3=0 items=0 ppid=2416 pid=2427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:21.870979 kubelet[2416]: E0115 00:31:21.870869 2416 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 15 00:31:21.871184 kernel: audit: type=1300 audit(1768437081.860:341): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff56d089e0 a2=0 a3=0 items=0 ppid=2416 pid=2427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:21.860000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 15 00:31:21.876127 kernel: audit: type=1327 audit(1768437081.860:341): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 15 00:31:21.876238 kernel: audit: type=1325 audit(1768437081.867:342): table=filter:43 family=2 entries=1 op=nft_register_chain pid=2428 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:31:21.867000 audit[2428]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2428 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:31:21.876339 kubelet[2416]: I0115 00:31:21.874228 2416 factory.go:221] Registration of the containerd container factory successfully Jan 15 00:31:21.876339 kubelet[2416]: I0115 00:31:21.874246 2416 factory.go:221] Registration of the systemd container factory successfully Jan 15 00:31:21.880653 kernel: audit: type=1300 audit(1768437081.867:342): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc49d86130 a2=0 a3=0 items=0 ppid=2416 pid=2428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:21.867000 audit[2428]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc49d86130 a2=0 a3=0 items=0 ppid=2416 pid=2428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:21.883162 kernel: audit: type=1327 audit(1768437081.867:342): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 15 00:31:21.867000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 15 00:31:21.885812 kernel: audit: type=1325 audit(1768437081.872:343): table=filter:44 family=2 entries=2 op=nft_register_chain pid=2430 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:31:21.872000 audit[2430]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2430 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:31:21.888060 kernel: audit: type=1300 audit(1768437081.872:343): arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe7a3e4620 a2=0 a3=0 items=0 ppid=2416 pid=2430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:21.872000 audit[2430]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe7a3e4620 a2=0 a3=0 items=0 ppid=2416 pid=2430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:21.891247 kernel: audit: type=1327 audit(1768437081.872:343): proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 15 00:31:21.892856 kernel: audit: type=1325 audit(1768437081.877:344): table=filter:45 family=2 entries=2 op=nft_register_chain pid=2432 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:31:21.872000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 15 00:31:21.877000 audit[2432]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2432 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:31:21.877000 audit[2432]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fffe1e95250 a2=0 a3=0 items=0 ppid=2416 pid=2432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:21.877000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 15 00:31:21.901229 kubelet[2416]: I0115 00:31:21.901197 2416 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 15 00:31:21.901229 kubelet[2416]: I0115 00:31:21.901216 2416 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 15 00:31:21.901229 kubelet[2416]: I0115 00:31:21.901233 2416 state_mem.go:36] "Initialized new in-memory state store" Jan 15 00:31:21.902763 kubelet[2416]: I0115 00:31:21.902731 2416 policy_none.go:49] "None policy: Start" Jan 15 00:31:21.902763 kubelet[2416]: I0115 00:31:21.902761 2416 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 15 00:31:21.902763 kubelet[2416]: I0115 00:31:21.902772 2416 state_mem.go:35] "Initializing new in-memory state store" Jan 15 00:31:21.905000 audit[2438]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2438 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:31:21.905000 audit[2438]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fffc47c2e90 a2=0 a3=0 items=0 ppid=2416 pid=2438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:21.905000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 15 00:31:21.907712 kubelet[2416]: I0115 00:31:21.907389 2416 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 15 00:31:21.908000 audit[2441]: NETFILTER_CFG table=mangle:47 family=2 entries=1 op=nft_register_chain pid=2441 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:31:21.908000 audit[2441]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc1807c210 a2=0 a3=0 items=0 ppid=2416 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:21.908000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 15 00:31:21.909000 audit[2440]: NETFILTER_CFG table=mangle:48 family=10 entries=2 op=nft_register_chain pid=2440 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:31:21.909000 audit[2440]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe756fedd0 a2=0 a3=0 items=0 ppid=2416 pid=2440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:21.909000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 15 00:31:21.913917 kubelet[2416]: I0115 00:31:21.912663 2416 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 15 00:31:21.913917 kubelet[2416]: I0115 00:31:21.912707 2416 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 15 00:31:21.913917 kubelet[2416]: I0115 00:31:21.912734 2416 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 15 00:31:21.913917 kubelet[2416]: I0115 00:31:21.912741 2416 kubelet.go:2382] "Starting kubelet main sync loop" Jan 15 00:31:21.913917 kubelet[2416]: E0115 00:31:21.912799 2416 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 15 00:31:21.914622 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 15 00:31:21.916000 audit[2443]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2443 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:31:21.916000 audit[2443]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffec57312c0 a2=0 a3=0 items=0 ppid=2416 pid=2443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:21.916000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 15 00:31:21.918357 kubelet[2416]: W0115 00:31:21.917955 2416 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://164.92.64.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.92.64.55:6443: connect: connection refused Jan 15 00:31:21.918357 kubelet[2416]: E0115 00:31:21.918053 2416 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://164.92.64.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 164.92.64.55:6443: connect: connection refused" logger="UnhandledError" Jan 15 00:31:21.918000 audit[2444]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2444 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:31:21.918000 audit[2444]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc758fbfa0 a2=0 a3=0 items=0 ppid=2416 pid=2444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:21.918000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 15 00:31:21.921000 audit[2445]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2445 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:31:21.921000 audit[2445]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffff5836f10 a2=0 a3=0 items=0 ppid=2416 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:21.921000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 15 00:31:21.923000 audit[2447]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2447 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:31:21.923000 audit[2447]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe9ac2bf90 a2=0 a3=0 items=0 ppid=2416 pid=2447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:21.923000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 15 00:31:21.926000 audit[2448]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2448 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 
15 00:31:21.926000 audit[2448]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd61e5a7a0 a2=0 a3=0 items=0 ppid=2416 pid=2448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:21.926000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 15 00:31:21.929433 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 15 00:31:21.934013 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 15 00:31:21.946523 kubelet[2416]: I0115 00:31:21.946471 2416 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 15 00:31:21.946777 kubelet[2416]: I0115 00:31:21.946758 2416 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 15 00:31:21.946840 kubelet[2416]: I0115 00:31:21.946780 2416 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 15 00:31:21.947744 kubelet[2416]: I0115 00:31:21.947707 2416 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 15 00:31:21.950540 kubelet[2416]: E0115 00:31:21.950509 2416 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 15 00:31:21.950632 kubelet[2416]: E0115 00:31:21.950584 2416 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4515.1.0-n-4ecc98c3fd\" not found" Jan 15 00:31:22.027090 systemd[1]: Created slice kubepods-burstable-podcfc7505063bf5c436d4bfb71721f3131.slice - libcontainer container kubepods-burstable-podcfc7505063bf5c436d4bfb71721f3131.slice. Jan 15 00:31:22.048489 kubelet[2416]: I0115 00:31:22.048431 2416 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:31:22.048910 kubelet[2416]: E0115 00:31:22.048883 2416 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://164.92.64.55:6443/api/v1/nodes\": dial tcp 164.92.64.55:6443: connect: connection refused" node="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:31:22.058163 kubelet[2416]: E0115 00:31:22.058110 2416 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-n-4ecc98c3fd\" not found" node="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:31:22.061475 systemd[1]: Created slice kubepods-burstable-podae4fbc9151d3d5445ed122cef62a8611.slice - libcontainer container kubepods-burstable-podae4fbc9151d3d5445ed122cef62a8611.slice. 
Jan 15 00:31:22.064173 kubelet[2416]: E0115 00:31:22.064125 2416 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-n-4ecc98c3fd\" not found" node="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:31:22.064969 kubelet[2416]: E0115 00:31:22.064929 2416 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.64.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-n-4ecc98c3fd?timeout=10s\": dial tcp 164.92.64.55:6443: connect: connection refused" interval="400ms" Jan 15 00:31:22.065381 kubelet[2416]: I0115 00:31:22.065067 2416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/462de88fbc3b634050b594e3daa38e9b-kubeconfig\") pod \"kube-scheduler-ci-4515.1.0-n-4ecc98c3fd\" (UID: \"462de88fbc3b634050b594e3daa38e9b\") " pod="kube-system/kube-scheduler-ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:31:22.065381 kubelet[2416]: I0115 00:31:22.065095 2416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ae4fbc9151d3d5445ed122cef62a8611-flexvolume-dir\") pod \"kube-controller-manager-ci-4515.1.0-n-4ecc98c3fd\" (UID: \"ae4fbc9151d3d5445ed122cef62a8611\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:31:22.065381 kubelet[2416]: I0115 00:31:22.065116 2416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cfc7505063bf5c436d4bfb71721f3131-ca-certs\") pod \"kube-apiserver-ci-4515.1.0-n-4ecc98c3fd\" (UID: \"cfc7505063bf5c436d4bfb71721f3131\") " pod="kube-system/kube-apiserver-ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:31:22.065381 kubelet[2416]: I0115 00:31:22.065131 2416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cfc7505063bf5c436d4bfb71721f3131-k8s-certs\") pod \"kube-apiserver-ci-4515.1.0-n-4ecc98c3fd\" (UID: \"cfc7505063bf5c436d4bfb71721f3131\") " pod="kube-system/kube-apiserver-ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:31:22.065381 kubelet[2416]: I0115 00:31:22.065146 2416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cfc7505063bf5c436d4bfb71721f3131-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515.1.0-n-4ecc98c3fd\" (UID: \"cfc7505063bf5c436d4bfb71721f3131\") " pod="kube-system/kube-apiserver-ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:31:22.065530 kubelet[2416]: I0115 00:31:22.065177 2416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae4fbc9151d3d5445ed122cef62a8611-ca-certs\") pod \"kube-controller-manager-ci-4515.1.0-n-4ecc98c3fd\" (UID: \"ae4fbc9151d3d5445ed122cef62a8611\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:31:22.065530 kubelet[2416]: I0115 00:31:22.065195 2416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ae4fbc9151d3d5445ed122cef62a8611-k8s-certs\") pod \"kube-controller-manager-ci-4515.1.0-n-4ecc98c3fd\" (UID: \"ae4fbc9151d3d5445ed122cef62a8611\") " 
pod="kube-system/kube-controller-manager-ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:31:22.065530 kubelet[2416]: I0115 00:31:22.065215 2416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ae4fbc9151d3d5445ed122cef62a8611-kubeconfig\") pod \"kube-controller-manager-ci-4515.1.0-n-4ecc98c3fd\" (UID: \"ae4fbc9151d3d5445ed122cef62a8611\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:31:22.065530 kubelet[2416]: I0115 00:31:22.065234 2416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ae4fbc9151d3d5445ed122cef62a8611-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515.1.0-n-4ecc98c3fd\" (UID: \"ae4fbc9151d3d5445ed122cef62a8611\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:31:22.068418 systemd[1]: Created slice kubepods-burstable-pod462de88fbc3b634050b594e3daa38e9b.slice - libcontainer container kubepods-burstable-pod462de88fbc3b634050b594e3daa38e9b.slice. Jan 15 00:31:22.071044 kubelet[2416]: E0115 00:31:22.070853 2416 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-n-4ecc98c3fd\" not found" node="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:31:22.251197 kubelet[2416]: I0115 00:31:22.251153 2416 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:31:22.251672 kubelet[2416]: E0115 00:31:22.251587 2416 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://164.92.64.55:6443/api/v1/nodes\": dial tcp 164.92.64.55:6443: connect: connection refused" node="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:31:22.359599 kubelet[2416]: E0115 00:31:22.359434 2416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:31:22.360944 containerd[1616]: time="2026-01-15T00:31:22.360900901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4515.1.0-n-4ecc98c3fd,Uid:cfc7505063bf5c436d4bfb71721f3131,Namespace:kube-system,Attempt:0,}" Jan 15 00:31:22.364919 kubelet[2416]: E0115 00:31:22.364854 2416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:31:22.367189 containerd[1616]: time="2026-01-15T00:31:22.366895268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515.1.0-n-4ecc98c3fd,Uid:ae4fbc9151d3d5445ed122cef62a8611,Namespace:kube-system,Attempt:0,}" Jan 15 00:31:22.372641 kubelet[2416]: E0115 00:31:22.372600 2416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:31:22.373420 containerd[1616]: time="2026-01-15T00:31:22.373359959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515.1.0-n-4ecc98c3fd,Uid:462de88fbc3b634050b594e3daa38e9b,Namespace:kube-system,Attempt:0,}" Jan 15 00:31:22.479345 kubelet[2416]: E0115 00:31:22.479278 2416 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://164.92.64.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-n-4ecc98c3fd?timeout=10s\": dial tcp 164.92.64.55:6443: connect: connection refused" interval="800ms" Jan 15 00:31:22.490320 containerd[1616]: time="2026-01-15T00:31:22.490230977Z" level=info msg="connecting to shim 41b6de69750b7edb67f02972b9f31efd488d3813b76e1c88e226ba2b571e1523" address="unix:///run/containerd/s/ba33707f4c3ea1851264c54ac495f2c7606c1d74ffdce7d3c40ea1fa7b5e8bd0" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:31:22.491236 containerd[1616]: time="2026-01-15T00:31:22.491164289Z" level=info msg="connecting to shim dbbabc788f988c79c164362a0135ee522a4aec2a0513209a4ca440876ce84704" address="unix:///run/containerd/s/af7bf093daf7d0cd45f094b0b51f0e191af6ec8f59730dac3031b17398329279" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:31:22.499522 containerd[1616]: time="2026-01-15T00:31:22.499454192Z" level=info msg="connecting to shim 503019e7c900ec374b667b566b5d0609baccddf7ef68d5bab57c2d22689670e8" address="unix:///run/containerd/s/0d29bb9895ca2ef61e19865f3d7a170645bd55fb0f3b4c3831c816af14f80122" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:31:22.613388 systemd[1]: Started cri-containerd-41b6de69750b7edb67f02972b9f31efd488d3813b76e1c88e226ba2b571e1523.scope - libcontainer container 41b6de69750b7edb67f02972b9f31efd488d3813b76e1c88e226ba2b571e1523. Jan 15 00:31:22.615787 systemd[1]: Started cri-containerd-503019e7c900ec374b667b566b5d0609baccddf7ef68d5bab57c2d22689670e8.scope - libcontainer container 503019e7c900ec374b667b566b5d0609baccddf7ef68d5bab57c2d22689670e8. Jan 15 00:31:22.618337 systemd[1]: Started cri-containerd-dbbabc788f988c79c164362a0135ee522a4aec2a0513209a4ca440876ce84704.scope - libcontainer container dbbabc788f988c79c164362a0135ee522a4aec2a0513209a4ca440876ce84704. 
Jan 15 00:31:22.642000 audit: BPF prog-id=83 op=LOAD Jan 15 00:31:22.643000 audit: BPF prog-id=84 op=LOAD Jan 15 00:31:22.643000 audit[2508]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017e238 a2=98 a3=0 items=0 ppid=2474 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431623664653639373530623765646236376630323937326239663331 Jan 15 00:31:22.643000 audit: BPF prog-id=84 op=UNLOAD Jan 15 00:31:22.643000 audit[2508]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2474 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431623664653639373530623765646236376630323937326239663331 Jan 15 00:31:22.645000 audit: BPF prog-id=85 op=LOAD Jan 15 00:31:22.646000 audit: BPF prog-id=86 op=LOAD Jan 15 00:31:22.646000 audit[2508]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017e488 a2=98 a3=0 items=0 ppid=2474 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431623664653639373530623765646236376630323937326239663331 Jan 15 00:31:22.646000 audit: BPF prog-id=87 op=LOAD Jan 15 00:31:22.646000 audit[2510]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=2485 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530333031396537633930306563333734623636376235363662356430 Jan 15 00:31:22.646000 audit: BPF prog-id=87 op=UNLOAD Jan 15 00:31:22.646000 audit[2510]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2485 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530333031396537633930306563333734623636376235363662356430 Jan 15 00:31:22.646000 audit: BPF prog-id=88 op=LOAD 
Jan 15 00:31:22.646000 audit[2510]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=2485 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530333031396537633930306563333734623636376235363662356430 Jan 15 00:31:22.646000 audit: BPF prog-id=89 op=LOAD Jan 15 00:31:22.646000 audit[2510]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=2485 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530333031396537633930306563333734623636376235363662356430 Jan 15 00:31:22.646000 audit: BPF prog-id=89 op=UNLOAD Jan 15 00:31:22.646000 audit[2510]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2485 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530333031396537633930306563333734623636376235363662356430 Jan 15 00:31:22.647000 audit: BPF prog-id=88 op=UNLOAD Jan 15 00:31:22.647000 audit[2510]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2485 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.647000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530333031396537633930306563333734623636376235363662356430 Jan 15 00:31:22.647000 audit: BPF prog-id=90 op=LOAD Jan 15 00:31:22.647000 audit[2508]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017e218 a2=98 a3=0 items=0 ppid=2474 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.647000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431623664653639373530623765646236376630323937326239663331 Jan 15 00:31:22.647000 audit: BPF prog-id=90 op=UNLOAD Jan 15 00:31:22.647000 audit[2508]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2474 pid=2508 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.647000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431623664653639373530623765646236376630323937326239663331 Jan 15 00:31:22.647000 audit: BPF prog-id=86 op=UNLOAD Jan 15 00:31:22.647000 audit[2508]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2474 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.647000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431623664653639373530623765646236376630323937326239663331 Jan 15 00:31:22.647000 audit: BPF prog-id=91 op=LOAD Jan 15 00:31:22.647000 audit[2508]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017e6e8 a2=98 a3=0 items=0 ppid=2474 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.647000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431623664653639373530623765646236376630323937326239663331 Jan 15 00:31:22.647000 audit: BPF prog-id=92 op=LOAD Jan 15 00:31:22.647000 audit[2510]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=2485 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.647000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530333031396537633930306563333734623636376235363662356430 Jan 15 00:31:22.651000 audit: BPF prog-id=93 op=LOAD Jan 15 00:31:22.652000 audit: BPF prog-id=94 op=LOAD Jan 15 00:31:22.652000 audit[2506]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2473 pid=2506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.652000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462626162633738386639383863373963313634333632613031333565 Jan 15 00:31:22.653000 audit: BPF prog-id=94 op=UNLOAD Jan 15 00:31:22.653000 audit[2506]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2473 pid=2506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462626162633738386639383863373963313634333632613031333565 Jan 15 00:31:22.653000 audit: BPF prog-id=95 op=LOAD Jan 15 00:31:22.653000 audit[2506]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2473 pid=2506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462626162633738386639383863373963313634333632613031333565 Jan 15 00:31:22.653000 audit: BPF prog-id=96 op=LOAD Jan 15 00:31:22.653000 audit[2506]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2473 pid=2506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462626162633738386639383863373963313634333632613031333565 Jan 15 00:31:22.653000 audit: BPF prog-id=96 op=UNLOAD Jan 15 00:31:22.653000 audit[2506]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2473 pid=2506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462626162633738386639383863373963313634333632613031333565 Jan 15 00:31:22.653000 audit: BPF prog-id=95 op=UNLOAD Jan 15 00:31:22.653000 audit[2506]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2473 pid=2506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462626162633738386639383863373963313634333632613031333565 Jan 15 00:31:22.654000 audit: BPF prog-id=97 op=LOAD Jan 15 00:31:22.654000 audit[2506]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2473 pid=2506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.654000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462626162633738386639383863373963313634333632613031333565 Jan 15 00:31:22.661410 kubelet[2416]: I0115 00:31:22.654971 2416 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:31:22.662088 kubelet[2416]: E0115 00:31:22.661791 2416 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://164.92.64.55:6443/api/v1/nodes\": dial tcp 164.92.64.55:6443: connect: connection refused" node="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:31:22.734929 containerd[1616]: time="2026-01-15T00:31:22.734682429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4515.1.0-n-4ecc98c3fd,Uid:cfc7505063bf5c436d4bfb71721f3131,Namespace:kube-system,Attempt:0,} returns sandbox id \"41b6de69750b7edb67f02972b9f31efd488d3813b76e1c88e226ba2b571e1523\"" Jan 15 00:31:22.737328 kubelet[2416]: E0115 00:31:22.737291 2416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:31:22.738616 containerd[1616]: time="2026-01-15T00:31:22.738546844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515.1.0-n-4ecc98c3fd,Uid:ae4fbc9151d3d5445ed122cef62a8611,Namespace:kube-system,Attempt:0,} returns sandbox id \"503019e7c900ec374b667b566b5d0609baccddf7ef68d5bab57c2d22689670e8\"" Jan 15 00:31:22.741909 kubelet[2416]: E0115 00:31:22.741827 2416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:31:22.744480 containerd[1616]: time="2026-01-15T00:31:22.744426979Z" level=info msg="CreateContainer within sandbox \"41b6de69750b7edb67f02972b9f31efd488d3813b76e1c88e226ba2b571e1523\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 15 00:31:22.745829 containerd[1616]: time="2026-01-15T00:31:22.745772718Z" level=info msg="CreateContainer within sandbox \"503019e7c900ec374b667b566b5d0609baccddf7ef68d5bab57c2d22689670e8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 15 00:31:22.766589 containerd[1616]: time="2026-01-15T00:31:22.766546303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515.1.0-n-4ecc98c3fd,Uid:462de88fbc3b634050b594e3daa38e9b,Namespace:kube-system,Attempt:0,} returns sandbox id \"dbbabc788f988c79c164362a0135ee522a4aec2a0513209a4ca440876ce84704\"" Jan 15 00:31:22.767802 containerd[1616]: time="2026-01-15T00:31:22.767739678Z" level=info msg="Container 5c71a551cbe0ae511a1ec088cf47ec50a0b41b0305da2096ff4ddc2a2f1131ac: CDI devices from CRI Config.CDIDevices: []" Jan 15 00:31:22.768299 containerd[1616]: time="2026-01-15T00:31:22.768199706Z" level=info msg="Container 08d32fff1140365c06aaa2bda99925807fd9b1f98eeddb11a1618db015c19a27: CDI devices from CRI Config.CDIDevices: []" Jan 15 00:31:22.768806 kubelet[2416]: E0115 00:31:22.768704 2416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:31:22.771947 containerd[1616]: time="2026-01-15T00:31:22.771884844Z" level=info msg="CreateContainer within 
sandbox \"dbbabc788f988c79c164362a0135ee522a4aec2a0513209a4ca440876ce84704\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 15 00:31:22.776875 containerd[1616]: time="2026-01-15T00:31:22.776804846Z" level=info msg="CreateContainer within sandbox \"503019e7c900ec374b667b566b5d0609baccddf7ef68d5bab57c2d22689670e8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"08d32fff1140365c06aaa2bda99925807fd9b1f98eeddb11a1618db015c19a27\"" Jan 15 00:31:22.780312 containerd[1616]: time="2026-01-15T00:31:22.780269929Z" level=info msg="StartContainer for \"08d32fff1140365c06aaa2bda99925807fd9b1f98eeddb11a1618db015c19a27\"" Jan 15 00:31:22.782934 containerd[1616]: time="2026-01-15T00:31:22.782883065Z" level=info msg="CreateContainer within sandbox \"41b6de69750b7edb67f02972b9f31efd488d3813b76e1c88e226ba2b571e1523\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5c71a551cbe0ae511a1ec088cf47ec50a0b41b0305da2096ff4ddc2a2f1131ac\"" Jan 15 00:31:22.783630 containerd[1616]: time="2026-01-15T00:31:22.783582856Z" level=info msg="connecting to shim 08d32fff1140365c06aaa2bda99925807fd9b1f98eeddb11a1618db015c19a27" address="unix:///run/containerd/s/0d29bb9895ca2ef61e19865f3d7a170645bd55fb0f3b4c3831c816af14f80122" protocol=ttrpc version=3 Jan 15 00:31:22.784762 containerd[1616]: time="2026-01-15T00:31:22.784723201Z" level=info msg="StartContainer for \"5c71a551cbe0ae511a1ec088cf47ec50a0b41b0305da2096ff4ddc2a2f1131ac\"" Jan 15 00:31:22.786401 containerd[1616]: time="2026-01-15T00:31:22.786360795Z" level=info msg="connecting to shim 5c71a551cbe0ae511a1ec088cf47ec50a0b41b0305da2096ff4ddc2a2f1131ac" address="unix:///run/containerd/s/ba33707f4c3ea1851264c54ac495f2c7606c1d74ffdce7d3c40ea1fa7b5e8bd0" protocol=ttrpc version=3 Jan 15 00:31:22.794573 containerd[1616]: time="2026-01-15T00:31:22.794429208Z" level=info msg="Container 48e58c7bbb9720896cbb93ca8ba9990e57fede8f2ecf55daf08a067656fc3bc9: CDI devices from CRI Config.CDIDevices: []" Jan 15 00:31:22.805509 containerd[1616]: time="2026-01-15T00:31:22.805452570Z" level=info msg="CreateContainer within sandbox \"dbbabc788f988c79c164362a0135ee522a4aec2a0513209a4ca440876ce84704\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"48e58c7bbb9720896cbb93ca8ba9990e57fede8f2ecf55daf08a067656fc3bc9\"" Jan 15 00:31:22.806724 containerd[1616]: time="2026-01-15T00:31:22.806674540Z" level=info msg="StartContainer for \"48e58c7bbb9720896cbb93ca8ba9990e57fede8f2ecf55daf08a067656fc3bc9\"" Jan 15 00:31:22.813831 containerd[1616]: time="2026-01-15T00:31:22.813766039Z" level=info msg="connecting to shim 48e58c7bbb9720896cbb93ca8ba9990e57fede8f2ecf55daf08a067656fc3bc9" address="unix:///run/containerd/s/af7bf093daf7d0cd45f094b0b51f0e191af6ec8f59730dac3031b17398329279" protocol=ttrpc version=3 Jan 15 00:31:22.823445 systemd[1]: Started cri-containerd-08d32fff1140365c06aaa2bda99925807fd9b1f98eeddb11a1618db015c19a27.scope - libcontainer container 08d32fff1140365c06aaa2bda99925807fd9b1f98eeddb11a1618db015c19a27. Jan 15 00:31:22.834770 systemd[1]: Started cri-containerd-5c71a551cbe0ae511a1ec088cf47ec50a0b41b0305da2096ff4ddc2a2f1131ac.scope - libcontainer container 5c71a551cbe0ae511a1ec088cf47ec50a0b41b0305da2096ff4ddc2a2f1131ac. Jan 15 00:31:22.864408 systemd[1]: Started cri-containerd-48e58c7bbb9720896cbb93ca8ba9990e57fede8f2ecf55daf08a067656fc3bc9.scope - libcontainer container 48e58c7bbb9720896cbb93ca8ba9990e57fede8f2ecf55daf08a067656fc3bc9. 
Jan 15 00:31:22.869000 audit: BPF prog-id=98 op=LOAD Jan 15 00:31:22.873000 audit: BPF prog-id=99 op=LOAD Jan 15 00:31:22.872000 audit: BPF prog-id=100 op=LOAD Jan 15 00:31:22.872000 audit[2589]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2485 pid=2589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.872000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038643332666666313134303336356330366161613262646139393932 Jan 15 00:31:22.873000 audit: BPF prog-id=100 op=UNLOAD Jan 15 00:31:22.873000 audit[2589]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2485 pid=2589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.873000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038643332666666313134303336356330366161613262646139393932 Jan 15 00:31:22.873000 audit: BPF prog-id=101 op=LOAD Jan 15 00:31:22.873000 audit[2589]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2485 pid=2589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.873000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038643332666666313134303336356330366161613262646139393932 Jan 15 00:31:22.873000 audit: BPF prog-id=102 op=LOAD Jan 15 00:31:22.873000 audit[2589]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=2485 pid=2589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.873000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038643332666666313134303336356330366161613262646139393932 Jan 15 00:31:22.873000 audit: BPF prog-id=102 op=UNLOAD Jan 15 00:31:22.873000 audit[2589]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2485 pid=2589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.873000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038643332666666313134303336356330366161613262646139393932 Jan 15 00:31:22.873000 audit: BPF prog-id=101 
op=UNLOAD Jan 15 00:31:22.873000 audit[2589]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2485 pid=2589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.873000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038643332666666313134303336356330366161613262646139393932 Jan 15 00:31:22.874000 audit: BPF prog-id=103 op=LOAD Jan 15 00:31:22.874000 audit[2589]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=2485 pid=2589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.874000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038643332666666313134303336356330366161613262646139393932 Jan 15 00:31:22.876000 audit: BPF prog-id=104 op=LOAD Jan 15 00:31:22.876000 audit[2590]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=2474 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563373161353531636265306165353131613165633038386366343765 Jan 15 00:31:22.876000 audit: BPF prog-id=104 op=UNLOAD Jan 15 00:31:22.876000 audit[2590]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2474 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563373161353531636265306165353131613165633038386366343765 Jan 15 00:31:22.876000 audit: BPF prog-id=105 op=LOAD Jan 15 00:31:22.876000 audit[2590]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=2474 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563373161353531636265306165353131613165633038386366343765 Jan 15 00:31:22.876000 audit: BPF prog-id=106 op=LOAD Jan 15 00:31:22.876000 audit[2590]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=2474 pid=2590 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563373161353531636265306165353131613165633038386366343765 Jan 15 00:31:22.876000 audit: BPF prog-id=106 op=UNLOAD Jan 15 00:31:22.876000 audit[2590]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2474 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563373161353531636265306165353131613165633038386366343765 Jan 15 00:31:22.877000 audit: BPF prog-id=105 op=UNLOAD Jan 15 00:31:22.877000 audit[2590]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2474 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.877000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563373161353531636265306165353131613165633038386366343765 Jan 15 00:31:22.877000 audit: BPF prog-id=107 op=LOAD Jan 15 00:31:22.877000 audit[2590]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=2474 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.877000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563373161353531636265306165353131613165633038386366343765 Jan 15 00:31:22.929000 audit: BPF prog-id=108 op=LOAD Jan 15 00:31:22.930000 audit: BPF prog-id=109 op=LOAD Jan 15 00:31:22.930000 audit[2612]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2473 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438653538633762626239373230383936636262393363613862613939 Jan 15 00:31:22.930000 audit: BPF prog-id=109 op=UNLOAD Jan 15 00:31:22.930000 audit[2612]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2473 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438653538633762626239373230383936636262393363613862613939 Jan 15 00:31:22.931000 audit: BPF prog-id=110 op=LOAD Jan 15 00:31:22.931000 audit[2612]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2473 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.931000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438653538633762626239373230383936636262393363613862613939 Jan 15 00:31:22.931000 audit: BPF prog-id=111 op=LOAD Jan 15 00:31:22.931000 audit[2612]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2473 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.931000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438653538633762626239373230383936636262393363613862613939 Jan 15 00:31:22.931000 audit: BPF prog-id=111 op=UNLOAD Jan 15 00:31:22.931000 audit[2612]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2473 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.931000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438653538633762626239373230383936636262393363613862613939 Jan 15 00:31:22.931000 audit: BPF prog-id=110 op=UNLOAD Jan 15 00:31:22.931000 audit[2612]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2473 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.931000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438653538633762626239373230383936636262393363613862613939 Jan 15 00:31:22.931000 audit: BPF prog-id=112 op=LOAD Jan 15 00:31:22.931000 audit[2612]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2473 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:22.931000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438653538633762626239373230383936636262393363613862613939 Jan 15 00:31:22.969141 containerd[1616]: time="2026-01-15T00:31:22.969094746Z" level=info msg="StartContainer for \"5c71a551cbe0ae511a1ec088cf47ec50a0b41b0305da2096ff4ddc2a2f1131ac\" returns successfully" Jan 15 00:31:23.014917 containerd[1616]: time="2026-01-15T00:31:23.014862815Z" level=info msg="StartContainer for \"08d32fff1140365c06aaa2bda99925807fd9b1f98eeddb11a1618db015c19a27\" returns successfully" Jan 15 00:31:23.029400 containerd[1616]: time="2026-01-15T00:31:23.029288379Z" level=info msg="StartContainer for \"48e58c7bbb9720896cbb93ca8ba9990e57fede8f2ecf55daf08a067656fc3bc9\" returns successfully" Jan 15 00:31:23.130825 kubelet[2416]: W0115 00:31:23.130432 2416 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://164.92.64.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.92.64.55:6443: connect: connection refused Jan 15 00:31:23.133262 kubelet[2416]: E0115 00:31:23.133133 2416 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://164.92.64.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 164.92.64.55:6443: connect: connection refused" logger="UnhandledError" Jan 15 00:31:23.280835 kubelet[2416]: E0115 00:31:23.280766 2416 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.64.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-n-4ecc98c3fd?timeout=10s\": dial tcp 164.92.64.55:6443: connect: connection refused" interval="1.6s" Jan 15 00:31:23.314798 kubelet[2416]: W0115 00:31:23.314649 2416 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://164.92.64.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 164.92.64.55:6443: connect: connection refused Jan 15 00:31:23.314798 kubelet[2416]: E0115 00:31:23.314758 2416 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://164.92.64.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 164.92.64.55:6443: connect: connection refused" logger="UnhandledError" Jan 15 00:31:23.466129 kubelet[2416]: I0115 00:31:23.463922 2416 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:31:23.983183 kubelet[2416]: E0115 00:31:23.982839 2416 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-n-4ecc98c3fd\" not found" node="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:31:23.984068 kubelet[2416]: E0115 00:31:23.984044 2416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:31:23.991326 kubelet[2416]: E0115 00:31:23.990775 2416 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-n-4ecc98c3fd\" not found" node="ci-4515.1.0-n-4ecc98c3fd" Jan 15 
00:31:23.991326 kubelet[2416]: E0115 00:31:23.990933 2416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 15 00:31:24.000045 kubelet[2416]: E0115 00:31:23.998630 2416 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-n-4ecc98c3fd\" not found" node="ci-4515.1.0-n-4ecc98c3fd"
Jan 15 00:31:24.000643 kubelet[2416]: E0115 00:31:24.000537 2416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 15 00:31:24.998907 kubelet[2416]: E0115 00:31:24.998870 2416 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-n-4ecc98c3fd\" not found" node="ci-4515.1.0-n-4ecc98c3fd"
Jan 15 00:31:25.000482 kubelet[2416]: E0115 00:31:24.999602 2416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 15 00:31:25.003048 kubelet[2416]: E0115 00:31:25.000765 2416 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-n-4ecc98c3fd\" not found" node="ci-4515.1.0-n-4ecc98c3fd"
Jan 15 00:31:25.003048 kubelet[2416]: E0115 00:31:25.000923 2416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 15 00:31:25.003048 kubelet[2416]: E0115 00:31:25.001306 2416 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-n-4ecc98c3fd\" not found" node="ci-4515.1.0-n-4ecc98c3fd"
Jan 15 00:31:25.003048 kubelet[2416]: E0115 00:31:25.001448 2416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 15 00:31:25.820657 kubelet[2416]: I0115 00:31:25.820406 2416 apiserver.go:52] "Watching apiserver"
Jan 15 00:31:25.847284 kubelet[2416]: E0115 00:31:25.847153 2416 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4515.1.0-n-4ecc98c3fd\" not found" node="ci-4515.1.0-n-4ecc98c3fd"
Jan 15 00:31:25.864189 kubelet[2416]: I0115 00:31:25.864104 2416 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 15 00:31:25.909049 kubelet[2416]: I0115 00:31:25.907765 2416 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515.1.0-n-4ecc98c3fd"
Jan 15 00:31:25.909049 kubelet[2416]: E0115 00:31:25.907820 2416 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515.1.0-n-4ecc98c3fd\": node \"ci-4515.1.0-n-4ecc98c3fd\" not found"
Jan 15 00:31:25.965214 kubelet[2416]: I0115 00:31:25.964823 2416 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-n-4ecc98c3fd"
Jan 15 00:31:25.979288 kubelet[2416]: E0115 00:31:25.979242 2416 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515.1.0-n-4ecc98c3fd\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4515.1.0-n-4ecc98c3fd"
Jan 15 00:31:25.979711 kubelet[2416]: I0115 00:31:25.979466 2416 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515.1.0-n-4ecc98c3fd"
Jan 15 00:31:25.987682 kubelet[2416]: E0115 00:31:25.987630 2416 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4515.1.0-n-4ecc98c3fd\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4515.1.0-n-4ecc98c3fd"
Jan 15 00:31:25.988598 kubelet[2416]: I0115 00:31:25.987898 2416 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-n-4ecc98c3fd"
Jan 15 00:31:25.998680 kubelet[2416]: E0115 00:31:25.998600 2416 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515.1.0-n-4ecc98c3fd\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4515.1.0-n-4ecc98c3fd"
Jan 15 00:31:26.001073 kubelet[2416]: I0115 00:31:26.000496 2416 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-n-4ecc98c3fd"
Jan 15 00:31:26.001073 kubelet[2416]: I0115 00:31:26.000908 2416 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-n-4ecc98c3fd"
Jan 15 00:31:26.008964 kubelet[2416]: E0115 00:31:26.008902 2416 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515.1.0-n-4ecc98c3fd\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4515.1.0-n-4ecc98c3fd"
Jan 15 00:31:26.009209 kubelet[2416]: E0115 00:31:26.009160 2416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 15 00:31:26.011591 kubelet[2416]: E0115 00:31:26.011284 2416 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515.1.0-n-4ecc98c3fd\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4515.1.0-n-4ecc98c3fd"
Jan 15 00:31:26.011591 kubelet[2416]: E0115 00:31:26.011508 2416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 15 00:31:27.003387 kubelet[2416]: I0115 00:31:27.003344 2416 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-n-4ecc98c3fd"
Jan 15 00:31:27.004877 kubelet[2416]: I0115 00:31:27.003760 2416 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-n-4ecc98c3fd"
Jan 15 00:31:27.011586 kubelet[2416]: W0115 00:31:27.011489 2416 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 15 00:31:27.011881 kubelet[2416]: E0115 00:31:27.011859 2416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 15 00:31:27.015864 kubelet[2416]: W0115 00:31:27.015486 2416 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 15 00:31:27.015864 kubelet[2416]: E0115 00:31:27.015782 2416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 15 00:31:28.005396 kubelet[2416]: E0115 00:31:28.005154 2416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 15 00:31:28.005396 kubelet[2416]: E0115 00:31:28.005307 2416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 15 00:31:28.084331 systemd[1]: Reload requested from client PID 2688 ('systemctl') (unit session-7.scope)...
Jan 15 00:31:28.084351 systemd[1]: Reloading...
Jan 15 00:31:28.216077 zram_generator::config[2734]: No configuration found.
Jan 15 00:31:28.492820 systemd[1]: Reloading finished in 407 ms.
Jan 15 00:31:28.520520 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 15 00:31:28.522261 kubelet[2416]: I0115 00:31:28.520518 2416 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 15 00:31:28.541731 systemd[1]: kubelet.service: Deactivated successfully.
Jan 15 00:31:28.542296 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 15 00:31:28.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 15 00:31:28.542603 systemd[1]: kubelet.service: Consumed 1.242s CPU time, 128.4M memory peak.
Jan 15 00:31:28.543531 kernel: kauditd_printk_skb: 158 callbacks suppressed
Jan 15 00:31:28.543621 kernel: audit: type=1131 audit(1768437088.541:401): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 15 00:31:28.547366 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 15 00:31:28.551915 kernel: audit: type=1334 audit(1768437088.548:402): prog-id=113 op=LOAD Jan 15 00:31:28.552061 kernel: audit: type=1334 audit(1768437088.548:403): prog-id=79 op=UNLOAD Jan 15 00:31:28.548000 audit: BPF prog-id=113 op=LOAD Jan 15 00:31:28.548000 audit: BPF prog-id=79 op=UNLOAD Jan 15 00:31:28.551000 audit: BPF prog-id=114 op=LOAD Jan 15 00:31:28.554211 kernel: audit: type=1334 audit(1768437088.551:404): prog-id=114 op=LOAD Jan 15 00:31:28.551000 audit: BPF prog-id=73 op=UNLOAD Jan 15 00:31:28.557058 kernel: audit: type=1334 audit(1768437088.551:405): prog-id=73 op=UNLOAD Jan 15 00:31:28.557174 kernel: audit: type=1334 audit(1768437088.551:406): prog-id=115 op=LOAD Jan 15 00:31:28.551000 audit: BPF prog-id=115 op=LOAD Jan 15 00:31:28.551000 audit: BPF prog-id=116 op=LOAD Jan 15 00:31:28.559807 kernel: audit: type=1334 audit(1768437088.551:407): prog-id=116 op=LOAD Jan 15 00:31:28.559891 kernel: audit: type=1334 audit(1768437088.551:408): prog-id=74 op=UNLOAD Jan 15 00:31:28.551000 audit: BPF prog-id=74 op=UNLOAD Jan 15 00:31:28.561129 kernel: audit: type=1334 audit(1768437088.551:409): prog-id=75 op=UNLOAD Jan 15 00:31:28.551000 audit: BPF prog-id=75 op=UNLOAD Jan 15 00:31:28.562463 kernel: audit: type=1334 audit(1768437088.554:410): prog-id=117 op=LOAD Jan 15 00:31:28.554000 audit: BPF prog-id=117 op=LOAD Jan 15 00:31:28.554000 audit: BPF prog-id=118 op=LOAD Jan 15 00:31:28.554000 audit: BPF prog-id=80 op=UNLOAD Jan 15 00:31:28.554000 audit: BPF prog-id=81 op=UNLOAD Jan 15 00:31:28.558000 audit: BPF prog-id=119 op=LOAD Jan 15 00:31:28.558000 audit: BPF prog-id=82 op=UNLOAD Jan 15 00:31:28.559000 audit: BPF prog-id=120 op=LOAD Jan 15 00:31:28.559000 audit: BPF prog-id=63 op=UNLOAD Jan 15 00:31:28.560000 audit: BPF prog-id=121 op=LOAD Jan 15 00:31:28.560000 audit: BPF prog-id=122 op=LOAD Jan 15 00:31:28.560000 audit: BPF prog-id=64 op=UNLOAD Jan 15 00:31:28.560000 audit: BPF prog-id=65 op=UNLOAD Jan 15 00:31:28.561000 audit: BPF prog-id=123 op=LOAD Jan 15 00:31:28.561000 audit: BPF prog-id=66 op=UNLOAD Jan 15 00:31:28.561000 audit: BPF prog-id=124 op=LOAD Jan 15 00:31:28.561000 audit: BPF prog-id=125 op=LOAD Jan 15 00:31:28.561000 audit: BPF prog-id=67 op=UNLOAD Jan 15 00:31:28.561000 audit: BPF prog-id=68 op=UNLOAD Jan 15 00:31:28.582000 audit: BPF prog-id=126 op=LOAD Jan 15 00:31:28.582000 audit: BPF prog-id=72 op=UNLOAD Jan 15 00:31:28.583000 audit: BPF prog-id=127 op=LOAD Jan 15 00:31:28.583000 audit: BPF prog-id=76 op=UNLOAD Jan 15 00:31:28.583000 audit: BPF prog-id=128 op=LOAD Jan 15 00:31:28.583000 audit: BPF prog-id=129 op=LOAD Jan 15 00:31:28.583000 audit: BPF prog-id=77 op=UNLOAD Jan 15 00:31:28.583000 audit: BPF prog-id=78 op=UNLOAD Jan 15 00:31:28.584000 audit: BPF prog-id=130 op=LOAD Jan 15 00:31:28.584000 audit: BPF prog-id=69 op=UNLOAD Jan 15 00:31:28.584000 audit: BPF prog-id=131 op=LOAD Jan 15 00:31:28.584000 audit: BPF prog-id=132 op=LOAD Jan 15 00:31:28.584000 audit: BPF prog-id=70 op=UNLOAD Jan 15 00:31:28.584000 audit: BPF prog-id=71 op=UNLOAD Jan 15 00:31:28.755746 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 00:31:28.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Jan 15 00:31:28.770543 (kubelet)[2785]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 15 00:31:28.850963 kubelet[2785]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 15 00:31:28.850963 kubelet[2785]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 15 00:31:28.850963 kubelet[2785]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 15 00:31:28.853071 kubelet[2785]: I0115 00:31:28.851683 2785 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 15 00:31:28.861823 kubelet[2785]: I0115 00:31:28.861780 2785 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 15 00:31:28.862008 kubelet[2785]: I0115 00:31:28.861997 2785 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 15 00:31:28.862912 kubelet[2785]: I0115 00:31:28.862882 2785 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 15 00:31:28.868105 kubelet[2785]: I0115 00:31:28.867994 2785 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 15 00:31:28.880471 kubelet[2785]: I0115 00:31:28.880417 2785 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 15 00:31:28.890831 kubelet[2785]: I0115 00:31:28.890788 2785 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 15 00:31:28.898716 kubelet[2785]: I0115 00:31:28.898596 2785 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified.
defaulting to /" Jan 15 00:31:28.899497 kubelet[2785]: I0115 00:31:28.899099 2785 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 15 00:31:28.899497 kubelet[2785]: I0115 00:31:28.899157 2785 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515.1.0-n-4ecc98c3fd","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 15 00:31:28.899497 kubelet[2785]: I0115 00:31:28.899378 2785 topology_manager.go:138] "Creating topology manager with none policy" Jan 15 00:31:28.899497 kubelet[2785]: I0115 00:31:28.899388 2785 container_manager_linux.go:304] "Creating device plugin manager" Jan 15 00:31:28.899740 kubelet[2785]: I0115 00:31:28.899451 2785 state_mem.go:36] "Initialized new in-memory state store" Jan 15 00:31:28.899958 kubelet[2785]: I0115 00:31:28.899944 2785 kubelet.go:446] "Attempting to sync node with API server" Jan 15 00:31:28.900084 kubelet[2785]: I0115 00:31:28.900071 2785 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 15 00:31:28.900177 kubelet[2785]: I0115 00:31:28.900168 2785 kubelet.go:352] "Adding apiserver pod source" Jan 15 00:31:28.900232 kubelet[2785]: I0115 00:31:28.900225 2785 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 15 00:31:28.902917 kubelet[2785]: I0115 00:31:28.902873 2785 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 15 00:31:28.903360 kubelet[2785]: I0115 00:31:28.903341 2785 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 15 00:31:28.904465 kubelet[2785]: I0115 00:31:28.904435 2785 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 15 00:31:28.904573 kubelet[2785]: I0115 00:31:28.904475 2785 server.go:1287] "Started kubelet" Jan 15 00:31:28.910064 kubelet[2785]: I0115 00:31:28.909384 2785 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 15 00:31:28.910997 kubelet[2785]: I0115 
00:31:28.910967 2785 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 15 00:31:28.911208 kubelet[2785]: I0115 00:31:28.911186 2785 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 15 00:31:28.912092 kubelet[2785]: I0115 00:31:28.912068 2785 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 15 00:31:28.914051 kubelet[2785]: I0115 00:31:28.912476 2785 server.go:479] "Adding debug handlers to kubelet server" Jan 15 00:31:28.918026 kubelet[2785]: I0115 00:31:28.917970 2785 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 15 00:31:28.941149 kubelet[2785]: I0115 00:31:28.934523 2785 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 15 00:31:28.941149 kubelet[2785]: I0115 00:31:28.934536 2785 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 15 00:31:28.941149 kubelet[2785]: E0115 00:31:28.934746 2785 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-n-4ecc98c3fd\" not found" Jan 15 00:31:28.941149 kubelet[2785]: I0115 00:31:28.940792 2785 reconciler.go:26] "Reconciler: start to sync state" Jan 15 00:31:28.945151 kubelet[2785]: I0115 00:31:28.945109 2785 factory.go:221] Registration of the systemd container factory successfully Jan 15 00:31:28.945574 kubelet[2785]: I0115 00:31:28.945544 2785 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 15 00:31:28.950968 kubelet[2785]: I0115 00:31:28.950922 2785 factory.go:221] Registration of the containerd container factory successfully Jan 15 00:31:28.968158 kubelet[2785]: E0115 00:31:28.968100 2785 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 15 00:31:28.983060 kubelet[2785]: I0115 00:31:28.981957 2785 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 15 00:31:28.984128 kubelet[2785]: I0115 00:31:28.984046 2785 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 15 00:31:28.984128 kubelet[2785]: I0115 00:31:28.984089 2785 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 15 00:31:28.984370 kubelet[2785]: I0115 00:31:28.984326 2785 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 15 00:31:28.984551 kubelet[2785]: I0115 00:31:28.984539 2785 kubelet.go:2382] "Starting kubelet main sync loop" Jan 15 00:31:28.984885 kubelet[2785]: E0115 00:31:28.984854 2785 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 15 00:31:29.060497 kubelet[2785]: I0115 00:31:29.060107 2785 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 15 00:31:29.060497 kubelet[2785]: I0115 00:31:29.060422 2785 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 15 00:31:29.060497 kubelet[2785]: I0115 00:31:29.060452 2785 state_mem.go:36] "Initialized new in-memory state store" Jan 15 00:31:29.061379 kubelet[2785]: I0115 00:31:29.061243 2785 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 15 00:31:29.061542 kubelet[2785]: I0115 00:31:29.061262 2785 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 15 00:31:29.061542 kubelet[2785]: I0115 00:31:29.061526 2785 policy_none.go:49] "None policy: Start" Jan 15 00:31:29.061773 kubelet[2785]: I0115 00:31:29.061611 2785 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 15 00:31:29.061773 kubelet[2785]: I0115 00:31:29.061626 2785 state_mem.go:35] "Initializing new in-memory state store" Jan 15 00:31:29.062190 kubelet[2785]: I0115 00:31:29.062093 2785 state_mem.go:75] "Updated machine memory state" Jan 15 00:31:29.071876 kubelet[2785]: I0115 00:31:29.071835 2785 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 15 00:31:29.073196 kubelet[2785]: I0115 00:31:29.073165 2785 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 15 00:31:29.073699 kubelet[2785]: I0115 00:31:29.073393 2785 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 15 00:31:29.074521 kubelet[2785]: I0115 00:31:29.074276 2785 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 15 00:31:29.079172 kubelet[2785]: E0115 00:31:29.078004 2785 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 15 00:31:29.092109 kubelet[2785]: I0115 00:31:29.092064 2785 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:31:29.093113 kubelet[2785]: I0115 00:31:29.093079 2785 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:31:29.093613 kubelet[2785]: I0115 00:31:29.093579 2785 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:31:29.112190 kubelet[2785]: W0115 00:31:29.111298 2785 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 15 00:31:29.117043 kubelet[2785]: W0115 00:31:29.116931 2785 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 15 00:31:29.117307 kubelet[2785]: E0115 00:31:29.117287 2785 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515.1.0-n-4ecc98c3fd\" already exists" pod="kube-system/kube-scheduler-ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:31:29.117543 kubelet[2785]: W0115 00:31:29.117454 2785 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 15 00:31:29.119105 kubelet[2785]: E0115 00:31:29.117531 2785 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515.1.0-n-4ecc98c3fd\" already exists" pod="kube-system/kube-apiserver-ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:31:29.186304 kubelet[2785]: I0115 00:31:29.185605 2785 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:31:29.198060 kubelet[2785]: I0115 00:31:29.197540 2785 kubelet_node_status.go:124] "Node was previously registered" node="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:31:29.198060 kubelet[2785]: I0115 00:31:29.197675 2785 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:31:29.242361 kubelet[2785]: I0115 00:31:29.242234 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cfc7505063bf5c436d4bfb71721f3131-ca-certs\") pod \"kube-apiserver-ci-4515.1.0-n-4ecc98c3fd\" (UID: \"cfc7505063bf5c436d4bfb71721f3131\") " pod="kube-system/kube-apiserver-ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:31:29.242796 kubelet[2785]: I0115 00:31:29.242456 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ae4fbc9151d3d5445ed122cef62a8611-flexvolume-dir\") pod \"kube-controller-manager-ci-4515.1.0-n-4ecc98c3fd\" (UID: \"ae4fbc9151d3d5445ed122cef62a8611\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:31:29.242796 kubelet[2785]: I0115 00:31:29.242482 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ae4fbc9151d3d5445ed122cef62a8611-kubeconfig\") pod \"kube-controller-manager-ci-4515.1.0-n-4ecc98c3fd\" (UID: \"ae4fbc9151d3d5445ed122cef62a8611\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:31:29.242796 kubelet[2785]: I0115 00:31:29.242707 
2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ae4fbc9151d3d5445ed122cef62a8611-k8s-certs\") pod \"kube-controller-manager-ci-4515.1.0-n-4ecc98c3fd\" (UID: \"ae4fbc9151d3d5445ed122cef62a8611\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:31:29.243103 kubelet[2785]: I0115 00:31:29.242992 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ae4fbc9151d3d5445ed122cef62a8611-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515.1.0-n-4ecc98c3fd\" (UID: \"ae4fbc9151d3d5445ed122cef62a8611\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:31:29.243103 kubelet[2785]: I0115 00:31:29.243050 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/462de88fbc3b634050b594e3daa38e9b-kubeconfig\") pod \"kube-scheduler-ci-4515.1.0-n-4ecc98c3fd\" (UID: \"462de88fbc3b634050b594e3daa38e9b\") " pod="kube-system/kube-scheduler-ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:31:29.243103 kubelet[2785]: I0115 00:31:29.243073 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cfc7505063bf5c436d4bfb71721f3131-k8s-certs\") pod \"kube-apiserver-ci-4515.1.0-n-4ecc98c3fd\" (UID: \"cfc7505063bf5c436d4bfb71721f3131\") " pod="kube-system/kube-apiserver-ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:31:29.243285 kubelet[2785]: I0115 00:31:29.243089 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cfc7505063bf5c436d4bfb71721f3131-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515.1.0-n-4ecc98c3fd\" (UID: \"cfc7505063bf5c436d4bfb71721f3131\") " pod="kube-system/kube-apiserver-ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:31:29.243285 kubelet[2785]: I0115 00:31:29.243260 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae4fbc9151d3d5445ed122cef62a8611-ca-certs\") pod \"kube-controller-manager-ci-4515.1.0-n-4ecc98c3fd\" (UID: \"ae4fbc9151d3d5445ed122cef62a8611\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:31:29.412843 kubelet[2785]: E0115 00:31:29.412443 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:31:29.419280 kubelet[2785]: E0115 00:31:29.418482 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:31:29.420502 kubelet[2785]: E0115 00:31:29.419413 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:31:29.922385 kubelet[2785]: I0115 00:31:29.922311 2785 apiserver.go:52] "Watching apiserver" Jan 15 00:31:29.941108 kubelet[2785]: I0115 00:31:29.940882 2785 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 15 
00:31:30.020853 kubelet[2785]: E0115 00:31:30.020788 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:31:30.023564 kubelet[2785]: E0115 00:31:30.022677 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:31:30.023854 kubelet[2785]: I0115 00:31:30.022911 2785 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:31:30.040887 kubelet[2785]: W0115 00:31:30.040660 2785 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 15 00:31:30.041469 kubelet[2785]: E0115 00:31:30.041273 2785 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4515.1.0-n-4ecc98c3fd\" already exists" pod="kube-system/kube-controller-manager-ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:31:30.042578 kubelet[2785]: E0115 00:31:30.042524 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:31:30.079183 kubelet[2785]: I0115 00:31:30.079102 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4515.1.0-n-4ecc98c3fd" podStartSLOduration=1.07907779 podStartE2EDuration="1.07907779s" podCreationTimestamp="2026-01-15 00:31:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 00:31:30.077824624 +0000 UTC m=+1.294087891" watchObservedRunningTime="2026-01-15 00:31:30.07907779 +0000 UTC m=+1.295341053" Jan 15 00:31:30.091277 kubelet[2785]: I0115 00:31:30.089861 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4515.1.0-n-4ecc98c3fd" podStartSLOduration=3.089827736 podStartE2EDuration="3.089827736s" podCreationTimestamp="2026-01-15 00:31:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 00:31:30.089326928 +0000 UTC m=+1.305590197" watchObservedRunningTime="2026-01-15 00:31:30.089827736 +0000 UTC m=+1.306091007" Jan 15 00:31:30.104433 kubelet[2785]: I0115 00:31:30.104067 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4515.1.0-n-4ecc98c3fd" podStartSLOduration=3.104043288 podStartE2EDuration="3.104043288s" podCreationTimestamp="2026-01-15 00:31:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 00:31:30.103519531 +0000 UTC m=+1.319782808" watchObservedRunningTime="2026-01-15 00:31:30.104043288 +0000 UTC m=+1.320306557" Jan 15 00:31:31.024459 kubelet[2785]: E0115 00:31:31.023071 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:31:31.024459 kubelet[2785]: E0115 00:31:31.023448 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:31:31.024459 kubelet[2785]: E0115 00:31:31.023932 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:31:32.024618 kubelet[2785]: E0115 00:31:32.024563 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:31:33.452893 kubelet[2785]: I0115 00:31:33.452838 2785 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 15 00:31:33.453831 containerd[1616]: time="2026-01-15T00:31:33.453773292Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 15 00:31:33.454621 kubelet[2785]: I0115 00:31:33.454283 2785 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 15 00:31:34.002700 systemd[1]: Created slice kubepods-besteffort-pod1f83444d_ee91_4b66_bbbc_fb67fa18a3b3.slice - libcontainer container kubepods-besteffort-pod1f83444d_ee91_4b66_bbbc_fb67fa18a3b3.slice. Jan 15 00:31:34.079671 kubelet[2785]: I0115 00:31:34.079620 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1f83444d-ee91-4b66-bbbc-fb67fa18a3b3-kube-proxy\") pod \"kube-proxy-7kjkd\" (UID: \"1f83444d-ee91-4b66-bbbc-fb67fa18a3b3\") " pod="kube-system/kube-proxy-7kjkd" Jan 15 00:31:34.079671 kubelet[2785]: I0115 00:31:34.079674 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1f83444d-ee91-4b66-bbbc-fb67fa18a3b3-xtables-lock\") pod \"kube-proxy-7kjkd\" (UID: \"1f83444d-ee91-4b66-bbbc-fb67fa18a3b3\") " pod="kube-system/kube-proxy-7kjkd" Jan 15 00:31:34.079981 kubelet[2785]: I0115 00:31:34.079704 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1f83444d-ee91-4b66-bbbc-fb67fa18a3b3-lib-modules\") pod \"kube-proxy-7kjkd\" (UID: \"1f83444d-ee91-4b66-bbbc-fb67fa18a3b3\") " pod="kube-system/kube-proxy-7kjkd" Jan 15 00:31:34.079981 kubelet[2785]: I0115 00:31:34.079734 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knlnx\" (UniqueName: \"kubernetes.io/projected/1f83444d-ee91-4b66-bbbc-fb67fa18a3b3-kube-api-access-knlnx\") pod \"kube-proxy-7kjkd\" (UID: \"1f83444d-ee91-4b66-bbbc-fb67fa18a3b3\") " pod="kube-system/kube-proxy-7kjkd" Jan 15 00:31:34.190672 kubelet[2785]: E0115 00:31:34.190550 2785 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 15 00:31:34.190672 kubelet[2785]: E0115 00:31:34.190608 2785 projected.go:194] Error preparing data for projected volume kube-api-access-knlnx for pod kube-system/kube-proxy-7kjkd: configmap "kube-root-ca.crt" not found Jan 15 00:31:34.190990 kubelet[2785]: E0115 00:31:34.190951 2785 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1f83444d-ee91-4b66-bbbc-fb67fa18a3b3-kube-api-access-knlnx podName:1f83444d-ee91-4b66-bbbc-fb67fa18a3b3 nodeName:}" failed. 
No retries permitted until 2026-01-15 00:31:34.690921001 +0000 UTC m=+5.907184259 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-knlnx" (UniqueName: "kubernetes.io/projected/1f83444d-ee91-4b66-bbbc-fb67fa18a3b3-kube-api-access-knlnx") pod "kube-proxy-7kjkd" (UID: "1f83444d-ee91-4b66-bbbc-fb67fa18a3b3") : configmap "kube-root-ca.crt" not found Jan 15 00:31:34.532171 systemd[1]: Created slice kubepods-besteffort-poda5f21ca3_0dda_4af3_9a10_1a5466b81f55.slice - libcontainer container kubepods-besteffort-poda5f21ca3_0dda_4af3_9a10_1a5466b81f55.slice. Jan 15 00:31:34.583567 kubelet[2785]: I0115 00:31:34.583410 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdjvf\" (UniqueName: \"kubernetes.io/projected/a5f21ca3-0dda-4af3-9a10-1a5466b81f55-kube-api-access-cdjvf\") pod \"tigera-operator-7dcd859c48-v485p\" (UID: \"a5f21ca3-0dda-4af3-9a10-1a5466b81f55\") " pod="tigera-operator/tigera-operator-7dcd859c48-v485p" Jan 15 00:31:34.583567 kubelet[2785]: I0115 00:31:34.583474 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a5f21ca3-0dda-4af3-9a10-1a5466b81f55-var-lib-calico\") pod \"tigera-operator-7dcd859c48-v485p\" (UID: \"a5f21ca3-0dda-4af3-9a10-1a5466b81f55\") " pod="tigera-operator/tigera-operator-7dcd859c48-v485p" Jan 15 00:31:34.839973 containerd[1616]: time="2026-01-15T00:31:34.839768086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-v485p,Uid:a5f21ca3-0dda-4af3-9a10-1a5466b81f55,Namespace:tigera-operator,Attempt:0,}" Jan 15 00:31:34.863349 containerd[1616]: time="2026-01-15T00:31:34.863104145Z" level=info msg="connecting to shim 306144bd9b3e8c6b49b584b35b05a6dbaf00605a224caf24400645cdb2d98720" address="unix:///run/containerd/s/13de704d8ad91f0a845e6a53e2060bce55631f0b2c5ba7ffab1fb197c9152d55" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:31:34.904453 systemd[1]: Started cri-containerd-306144bd9b3e8c6b49b584b35b05a6dbaf00605a224caf24400645cdb2d98720.scope - libcontainer container 306144bd9b3e8c6b49b584b35b05a6dbaf00605a224caf24400645cdb2d98720. 
Jan 15 00:31:34.915033 kubelet[2785]: E0115 00:31:34.914963 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:31:34.925768 containerd[1616]: time="2026-01-15T00:31:34.923525939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7kjkd,Uid:1f83444d-ee91-4b66-bbbc-fb67fa18a3b3,Namespace:kube-system,Attempt:0,}" Jan 15 00:31:34.941076 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 15 00:31:34.941240 kernel: audit: type=1334 audit(1768437094.938:443): prog-id=133 op=LOAD Jan 15 00:31:34.938000 audit: BPF prog-id=133 op=LOAD Jan 15 00:31:34.941000 audit: BPF prog-id=134 op=LOAD Jan 15 00:31:34.944056 kernel: audit: type=1334 audit(1768437094.941:444): prog-id=134 op=LOAD Jan 15 00:31:34.941000 audit[2852]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2841 pid=2852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:34.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330363134346264396233653863366234396235383462333562303561 Jan 15 00:31:34.950141 kernel: audit: type=1300 audit(1768437094.941:444): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2841 pid=2852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:34.950233 kernel: audit: type=1327 audit(1768437094.941:444): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330363134346264396233653863366234396235383462333562303561 Jan 15 00:31:34.960049 kernel: audit: type=1334 audit(1768437094.941:445): prog-id=134 op=UNLOAD Jan 15 00:31:34.941000 audit: BPF prog-id=134 op=UNLOAD Jan 15 00:31:34.941000 audit[2852]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2841 pid=2852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:34.966060 kernel: audit: type=1300 audit(1768437094.941:445): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2841 pid=2852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:34.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330363134346264396233653863366234396235383462333562303561 Jan 15 00:31:34.972558 containerd[1616]: time="2026-01-15T00:31:34.972508582Z" level=info msg="connecting to shim 360d95338acdc51aca412c0429eaeacaad77f01b9fbe9706c193e4025812c603" 
address="unix:///run/containerd/s/e6cd87ac9cf63d0e66c23284624037873966a9ee1799bf8e49603987aadfe579" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:31:34.974084 kernel: audit: type=1327 audit(1768437094.941:445): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330363134346264396233653863366234396235383462333562303561 Jan 15 00:31:34.941000 audit: BPF prog-id=135 op=LOAD Jan 15 00:31:34.979056 kernel: audit: type=1334 audit(1768437094.941:446): prog-id=135 op=LOAD Jan 15 00:31:34.979250 kernel: audit: type=1300 audit(1768437094.941:446): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2841 pid=2852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:34.941000 audit[2852]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2841 pid=2852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:34.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330363134346264396233653863366234396235383462333562303561 Jan 15 00:31:34.993069 kernel: audit: type=1327 audit(1768437094.941:446): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330363134346264396233653863366234396235383462333562303561 Jan 15 00:31:34.941000 audit: BPF prog-id=136 op=LOAD Jan 15 00:31:34.941000 audit[2852]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2841 pid=2852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:34.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330363134346264396233653863366234396235383462333562303561 Jan 15 00:31:34.941000 audit: BPF prog-id=136 op=UNLOAD Jan 15 00:31:34.941000 audit[2852]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2841 pid=2852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:34.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330363134346264396233653863366234396235383462333562303561 Jan 15 00:31:34.941000 audit: BPF prog-id=135 op=UNLOAD Jan 15 00:31:34.941000 audit[2852]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2841 pid=2852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:34.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330363134346264396233653863366234396235383462333562303561 Jan 15 00:31:34.941000 audit: BPF prog-id=137 op=LOAD Jan 15 00:31:34.941000 audit[2852]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2841 pid=2852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:34.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330363134346264396233653863366234396235383462333562303561 Jan 15 00:31:35.017434 systemd[1]: Started cri-containerd-360d95338acdc51aca412c0429eaeacaad77f01b9fbe9706c193e4025812c603.scope - libcontainer container 360d95338acdc51aca412c0429eaeacaad77f01b9fbe9706c193e4025812c603. Jan 15 00:31:35.035000 audit: BPF prog-id=138 op=LOAD Jan 15 00:31:35.037111 containerd[1616]: time="2026-01-15T00:31:35.035606097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-v485p,Uid:a5f21ca3-0dda-4af3-9a10-1a5466b81f55,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"306144bd9b3e8c6b49b584b35b05a6dbaf00605a224caf24400645cdb2d98720\"" Jan 15 00:31:35.036000 audit: BPF prog-id=139 op=LOAD Jan 15 00:31:35.036000 audit[2894]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2881 pid=2894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336306439353333386163646335316163613431326330343239656165 Jan 15 00:31:35.036000 audit: BPF prog-id=139 op=UNLOAD Jan 15 00:31:35.036000 audit[2894]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2881 pid=2894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336306439353333386163646335316163613431326330343239656165 Jan 15 00:31:35.037000 audit: BPF prog-id=140 op=LOAD Jan 15 00:31:35.037000 audit[2894]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2881 pid=2894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.037000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336306439353333386163646335316163613431326330343239656165 Jan 15 00:31:35.037000 audit: BPF prog-id=141 op=LOAD Jan 15 00:31:35.037000 audit[2894]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2881 pid=2894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.037000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336306439353333386163646335316163613431326330343239656165 Jan 15 00:31:35.037000 audit: BPF prog-id=141 op=UNLOAD Jan 15 00:31:35.037000 audit[2894]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2881 pid=2894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.037000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336306439353333386163646335316163613431326330343239656165 Jan 15 00:31:35.037000 audit: BPF prog-id=140 op=UNLOAD Jan 15 00:31:35.037000 audit[2894]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2881 pid=2894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.037000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336306439353333386163646335316163613431326330343239656165 Jan 15 00:31:35.037000 audit: BPF prog-id=142 op=LOAD Jan 15 00:31:35.037000 audit[2894]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2881 pid=2894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.037000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336306439353333386163646335316163613431326330343239656165 Jan 15 00:31:35.047216 containerd[1616]: time="2026-01-15T00:31:35.046808573Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 15 00:31:35.053927 systemd-resolved[1291]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
Jan 15 00:31:35.065588 containerd[1616]: time="2026-01-15T00:31:35.065538419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7kjkd,Uid:1f83444d-ee91-4b66-bbbc-fb67fa18a3b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"360d95338acdc51aca412c0429eaeacaad77f01b9fbe9706c193e4025812c603\"" Jan 15 00:31:35.066635 kubelet[2785]: E0115 00:31:35.066606 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:31:35.072691 containerd[1616]: time="2026-01-15T00:31:35.072427854Z" level=info msg="CreateContainer within sandbox \"360d95338acdc51aca412c0429eaeacaad77f01b9fbe9706c193e4025812c603\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 15 00:31:35.090682 containerd[1616]: time="2026-01-15T00:31:35.089096986Z" level=info msg="Container f8813599635979408a5acc78d5392bad6b1d852fa3554f1cef4c6f852ee30fa0: CDI devices from CRI Config.CDIDevices: []" Jan 15 00:31:35.099570 containerd[1616]: time="2026-01-15T00:31:35.099506610Z" level=info msg="CreateContainer within sandbox \"360d95338acdc51aca412c0429eaeacaad77f01b9fbe9706c193e4025812c603\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f8813599635979408a5acc78d5392bad6b1d852fa3554f1cef4c6f852ee30fa0\"" Jan 15 00:31:35.101737 containerd[1616]: time="2026-01-15T00:31:35.101685117Z" level=info msg="StartContainer for \"f8813599635979408a5acc78d5392bad6b1d852fa3554f1cef4c6f852ee30fa0\"" Jan 15 00:31:35.105049 containerd[1616]: time="2026-01-15T00:31:35.104928613Z" level=info msg="connecting to shim f8813599635979408a5acc78d5392bad6b1d852fa3554f1cef4c6f852ee30fa0" address="unix:///run/containerd/s/e6cd87ac9cf63d0e66c23284624037873966a9ee1799bf8e49603987aadfe579" protocol=ttrpc version=3 Jan 15 00:31:35.133537 systemd[1]: Started cri-containerd-f8813599635979408a5acc78d5392bad6b1d852fa3554f1cef4c6f852ee30fa0.scope - libcontainer container f8813599635979408a5acc78d5392bad6b1d852fa3554f1cef4c6f852ee30fa0. 
Jan 15 00:31:35.197000 audit: BPF prog-id=143 op=LOAD Jan 15 00:31:35.197000 audit[2925]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2881 pid=2925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638383133353939363335393739343038613561636337386435333932 Jan 15 00:31:35.198000 audit: BPF prog-id=144 op=LOAD Jan 15 00:31:35.198000 audit[2925]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=2881 pid=2925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.198000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638383133353939363335393739343038613561636337386435333932 Jan 15 00:31:35.198000 audit: BPF prog-id=144 op=UNLOAD Jan 15 00:31:35.198000 audit[2925]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2881 pid=2925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.198000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638383133353939363335393739343038613561636337386435333932 Jan 15 00:31:35.198000 audit: BPF prog-id=143 op=UNLOAD Jan 15 00:31:35.198000 audit[2925]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2881 pid=2925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.198000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638383133353939363335393739343038613561636337386435333932 Jan 15 00:31:35.198000 audit: BPF prog-id=145 op=LOAD Jan 15 00:31:35.198000 audit[2925]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=2881 pid=2925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.198000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638383133353939363335393739343038613561636337386435333932 Jan 15 00:31:35.228077 containerd[1616]: time="2026-01-15T00:31:35.228009554Z" level=info msg="StartContainer for 
\"f8813599635979408a5acc78d5392bad6b1d852fa3554f1cef4c6f852ee30fa0\" returns successfully" Jan 15 00:31:35.464000 audit[2988]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=2988 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:31:35.464000 audit[2988]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff1b406130 a2=0 a3=7fff1b40611c items=0 ppid=2937 pid=2988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.464000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 15 00:31:35.465000 audit[2987]: NETFILTER_CFG table=mangle:55 family=2 entries=1 op=nft_register_chain pid=2987 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:31:35.465000 audit[2987]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcb150e6a0 a2=0 a3=7ffcb150e68c items=0 ppid=2937 pid=2987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.465000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 15 00:31:35.467000 audit[2989]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=2989 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:31:35.467000 audit[2989]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb8834230 a2=0 a3=7ffeb883421c items=0 ppid=2937 pid=2989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.467000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 15 00:31:35.470000 audit[2992]: NETFILTER_CFG table=filter:57 family=10 entries=1 op=nft_register_chain pid=2992 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:31:35.470000 audit[2992]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffebe3a9710 a2=0 a3=7ffebe3a96fc items=0 ppid=2937 pid=2992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.470000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 15 00:31:35.471000 audit[2991]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2991 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:31:35.471000 audit[2991]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc42076740 a2=0 a3=7ffc4207672c items=0 ppid=2937 pid=2991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.471000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 15 00:31:35.478000 audit[2993]: NETFILTER_CFG table=filter:59 
family=2 entries=1 op=nft_register_chain pid=2993 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:31:35.478000 audit[2993]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd0df44e20 a2=0 a3=7ffd0df44e0c items=0 ppid=2937 pid=2993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.478000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 15 00:31:35.574000 audit[2994]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=2994 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:31:35.574000 audit[2994]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd48531fb0 a2=0 a3=7ffd48531f9c items=0 ppid=2937 pid=2994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.574000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 15 00:31:35.585000 audit[2996]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=2996 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:31:35.585000 audit[2996]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffecfcfbbb0 a2=0 a3=7ffecfcfbb9c items=0 ppid=2937 pid=2996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.585000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 15 00:31:35.591000 audit[2999]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=2999 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:31:35.591000 audit[2999]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff752ea860 a2=0 a3=7fff752ea84c items=0 ppid=2937 pid=2999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.591000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 15 00:31:35.593000 audit[3000]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3000 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:31:35.593000 audit[3000]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd10bcb350 a2=0 a3=7ffd10bcb33c items=0 ppid=2937 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.593000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 15 00:31:35.596000 audit[3002]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3002 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:31:35.596000 audit[3002]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd170dc7b0 a2=0 a3=7ffd170dc79c items=0 ppid=2937 pid=3002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.596000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 15 00:31:35.599000 audit[3003]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3003 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:31:35.599000 audit[3003]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd0af96e00 a2=0 a3=7ffd0af96dec items=0 ppid=2937 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.599000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 15 00:31:35.603000 audit[3005]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3005 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:31:35.603000 audit[3005]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcd9cae9c0 a2=0 a3=7ffcd9cae9ac items=0 ppid=2937 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.603000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 15 00:31:35.615000 audit[3008]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3008 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:31:35.615000 audit[3008]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcdfc5e2e0 a2=0 a3=7ffcdfc5e2cc items=0 ppid=2937 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.615000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 15 00:31:35.618000 audit[3009]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3009 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:31:35.618000 audit[3009]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd4b52ebd0 a2=0 a3=7ffd4b52ebbc items=0 
ppid=2937 pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.618000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 15 00:31:35.622000 audit[3011]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3011 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:31:35.622000 audit[3011]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcc60f53f0 a2=0 a3=7ffcc60f53dc items=0 ppid=2937 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.622000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 15 00:31:35.624000 audit[3012]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3012 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:31:35.624000 audit[3012]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc13be50f0 a2=0 a3=7ffc13be50dc items=0 ppid=2937 pid=3012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.624000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 15 00:31:35.629000 audit[3014]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3014 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:31:35.629000 audit[3014]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc8d1545e0 a2=0 a3=7ffc8d1545cc items=0 ppid=2937 pid=3014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.629000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 15 00:31:35.637000 audit[3017]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3017 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:31:35.637000 audit[3017]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffee2b82b00 a2=0 a3=7ffee2b82aec items=0 ppid=2937 pid=3017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.637000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 15 00:31:35.643000 audit[3020]: NETFILTER_CFG table=filter:73 
family=2 entries=1 op=nft_register_rule pid=3020 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:31:35.643000 audit[3020]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff0ed24fb0 a2=0 a3=7fff0ed24f9c items=0 ppid=2937 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.643000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 15 00:31:35.645000 audit[3021]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3021 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:31:35.645000 audit[3021]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd9dccfc90 a2=0 a3=7ffd9dccfc7c items=0 ppid=2937 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.645000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 15 00:31:35.649000 audit[3023]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3023 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:31:35.649000 audit[3023]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe52a60940 a2=0 a3=7ffe52a6092c items=0 ppid=2937 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.649000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 15 00:31:35.654000 audit[3026]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3026 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:31:35.654000 audit[3026]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe31ef5b00 a2=0 a3=7ffe31ef5aec items=0 ppid=2937 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.654000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 15 00:31:35.656000 audit[3027]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3027 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:31:35.656000 audit[3027]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff743c9310 a2=0 a3=7fff743c92fc items=0 ppid=2937 pid=3027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.656000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 15 00:31:35.659000 audit[3029]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3029 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:31:35.659000 audit[3029]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fff134a7be0 a2=0 a3=7fff134a7bcc items=0 ppid=2937 pid=3029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.659000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 15 00:31:35.688000 audit[3035]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3035 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:31:35.688000 audit[3035]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc4172c5c0 a2=0 a3=7ffc4172c5ac items=0 ppid=2937 pid=3035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.688000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:31:35.705000 audit[3035]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3035 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:31:35.705000 audit[3035]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffc4172c5c0 a2=0 a3=7ffc4172c5ac items=0 ppid=2937 pid=3035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.705000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:31:35.707000 audit[3040]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3040 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:31:35.707000 audit[3040]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc0e75fee0 a2=0 a3=7ffc0e75fecc items=0 ppid=2937 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.707000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 15 00:31:35.712000 audit[3042]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3042 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:31:35.712000 audit[3042]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fffe54dc420 a2=0 a3=7fffe54dc40c items=0 ppid=2937 pid=3042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.712000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 15 00:31:35.718000 audit[3045]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3045 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:31:35.718000 audit[3045]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff923bc460 a2=0 a3=7fff923bc44c items=0 ppid=2937 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.718000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 15 00:31:35.720000 audit[3046]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3046 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:31:35.720000 audit[3046]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff890e84e0 a2=0 a3=7fff890e84cc items=0 ppid=2937 pid=3046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.720000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 15 00:31:35.725000 audit[3048]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3048 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:31:35.725000 audit[3048]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffebb0e8af0 a2=0 a3=7ffebb0e8adc items=0 ppid=2937 pid=3048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.725000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 15 00:31:35.727000 audit[3049]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3049 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:31:35.727000 audit[3049]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff3ac73e80 a2=0 a3=7fff3ac73e6c items=0 ppid=2937 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.727000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 15 00:31:35.731000 audit[3051]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3051 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:31:35.731000 audit[3051]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff03f49b20 a2=0 
a3=7fff03f49b0c items=0 ppid=2937 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.731000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 15 00:31:35.742000 audit[3054]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3054 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:31:35.742000 audit[3054]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fff9ad84f10 a2=0 a3=7fff9ad84efc items=0 ppid=2937 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.742000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 15 00:31:35.745000 audit[3055]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3055 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:31:35.745000 audit[3055]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd73817470 a2=0 a3=7ffd7381745c items=0 ppid=2937 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.745000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 15 00:31:35.749000 audit[3057]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3057 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:31:35.749000 audit[3057]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffca7616700 a2=0 a3=7ffca76166ec items=0 ppid=2937 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.749000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 15 00:31:35.751000 audit[3058]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3058 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:31:35.751000 audit[3058]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcf16f2a20 a2=0 a3=7ffcf16f2a0c items=0 ppid=2937 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.751000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 15 00:31:35.755000 
audit[3060]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3060 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:31:35.755000 audit[3060]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd0e7e2ad0 a2=0 a3=7ffd0e7e2abc items=0 ppid=2937 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.755000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 15 00:31:35.762000 audit[3063]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3063 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:31:35.762000 audit[3063]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe97e2b5f0 a2=0 a3=7ffe97e2b5dc items=0 ppid=2937 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.762000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 15 00:31:35.768000 audit[3066]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3066 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:31:35.768000 audit[3066]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffefd85d00 a2=0 a3=7fffefd85cec items=0 ppid=2937 pid=3066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.768000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 15 00:31:35.770000 audit[3067]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3067 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:31:35.770000 audit[3067]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcd286de00 a2=0 a3=7ffcd286ddec items=0 ppid=2937 pid=3067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.770000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 15 00:31:35.774000 audit[3069]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3069 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:31:35.774000 audit[3069]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc6adff4f0 a2=0 a3=7ffc6adff4dc items=0 ppid=2937 pid=3069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.774000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 15 00:31:35.780000 audit[3072]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3072 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:31:35.780000 audit[3072]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffca9035390 a2=0 a3=7ffca903537c items=0 ppid=2937 pid=3072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.780000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 15 00:31:35.782000 audit[3073]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3073 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:31:35.782000 audit[3073]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc126f65d0 a2=0 a3=7ffc126f65bc items=0 ppid=2937 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.782000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 15 00:31:35.786000 audit[3075]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3075 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:31:35.786000 audit[3075]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffe90ce0670 a2=0 a3=7ffe90ce065c items=0 ppid=2937 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.786000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 15 00:31:35.787000 audit[3076]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3076 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:31:35.787000 audit[3076]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe53cf1350 a2=0 a3=7ffe53cf133c items=0 ppid=2937 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.787000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 15 00:31:35.791000 audit[3078]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3078 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:31:35.791000 audit[3078]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=228 a0=3 a1=7ffefc5b77a0 a2=0 a3=7ffefc5b778c items=0 ppid=2937 pid=3078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.791000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 15 00:31:35.797000 audit[3081]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3081 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:31:35.797000 audit[3081]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcb8316fa0 a2=0 a3=7ffcb8316f8c items=0 ppid=2937 pid=3081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.797000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 15 00:31:35.802000 audit[3083]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3083 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 15 00:31:35.802000 audit[3083]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffeaddce370 a2=0 a3=7ffeaddce35c items=0 ppid=2937 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.802000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:31:35.803000 audit[3083]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3083 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 15 00:31:35.803000 audit[3083]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffeaddce370 a2=0 a3=7ffeaddce35c items=0 ppid=2937 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:35.803000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:31:35.976839 kubelet[2785]: E0115 00:31:35.975158 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:31:36.038880 kubelet[2785]: E0115 00:31:36.038805 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:31:36.040737 kubelet[2785]: E0115 00:31:36.040056 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:31:36.088380 kubelet[2785]: I0115 00:31:36.088174 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7kjkd" podStartSLOduration=3.088153671 podStartE2EDuration="3.088153671s" podCreationTimestamp="2026-01-15 00:31:33 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 00:31:36.057638703 +0000 UTC m=+7.273901971" watchObservedRunningTime="2026-01-15 00:31:36.088153671 +0000 UTC m=+7.304416939" Jan 15 00:31:37.042689 kubelet[2785]: E0115 00:31:37.042300 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:31:37.241697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4168747862.mount: Deactivated successfully. Jan 15 00:31:37.483201 kubelet[2785]: E0115 00:31:37.482778 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:31:38.044786 kubelet[2785]: E0115 00:31:38.044596 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:31:38.379934 update_engine[1593]: I20260115 00:31:38.379748 1593 update_attempter.cc:509] Updating boot flags... Jan 15 00:31:39.050048 kubelet[2785]: E0115 00:31:39.049936 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:31:39.693111 containerd[1616]: time="2026-01-15T00:31:39.692873574Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:31:39.694273 containerd[1616]: time="2026-01-15T00:31:39.693892091Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25052948" Jan 15 00:31:39.694568 containerd[1616]: time="2026-01-15T00:31:39.694539894Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:31:39.696454 containerd[1616]: time="2026-01-15T00:31:39.696393125Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:31:39.697677 containerd[1616]: time="2026-01-15T00:31:39.697640516Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 4.65078806s" Jan 15 00:31:39.697817 containerd[1616]: time="2026-01-15T00:31:39.697799246Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 15 00:31:39.702172 containerd[1616]: time="2026-01-15T00:31:39.701659853Z" level=info msg="CreateContainer within sandbox \"306144bd9b3e8c6b49b584b35b05a6dbaf00605a224caf24400645cdb2d98720\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 15 00:31:39.710049 containerd[1616]: time="2026-01-15T00:31:39.709787845Z" level=info msg="Container d831422da5ceaee25410a0973b73f24795d5cb229dedc7c4aa294717cc1fadd0: CDI 
devices from CRI Config.CDIDevices: []" Jan 15 00:31:39.713106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2163090268.mount: Deactivated successfully. Jan 15 00:31:39.719233 containerd[1616]: time="2026-01-15T00:31:39.719179343Z" level=info msg="CreateContainer within sandbox \"306144bd9b3e8c6b49b584b35b05a6dbaf00605a224caf24400645cdb2d98720\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d831422da5ceaee25410a0973b73f24795d5cb229dedc7c4aa294717cc1fadd0\"" Jan 15 00:31:39.723255 containerd[1616]: time="2026-01-15T00:31:39.723205221Z" level=info msg="StartContainer for \"d831422da5ceaee25410a0973b73f24795d5cb229dedc7c4aa294717cc1fadd0\"" Jan 15 00:31:39.724691 containerd[1616]: time="2026-01-15T00:31:39.724559606Z" level=info msg="connecting to shim d831422da5ceaee25410a0973b73f24795d5cb229dedc7c4aa294717cc1fadd0" address="unix:///run/containerd/s/13de704d8ad91f0a845e6a53e2060bce55631f0b2c5ba7ffab1fb197c9152d55" protocol=ttrpc version=3 Jan 15 00:31:39.755350 systemd[1]: Started cri-containerd-d831422da5ceaee25410a0973b73f24795d5cb229dedc7c4aa294717cc1fadd0.scope - libcontainer container d831422da5ceaee25410a0973b73f24795d5cb229dedc7c4aa294717cc1fadd0. Jan 15 00:31:39.771000 audit: BPF prog-id=146 op=LOAD Jan 15 00:31:39.771000 audit: BPF prog-id=147 op=LOAD Jan 15 00:31:39.771000 audit[3107]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2841 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:39.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438333134323264613563656165653235343130613039373362373366 Jan 15 00:31:39.771000 audit: BPF prog-id=147 op=UNLOAD Jan 15 00:31:39.771000 audit[3107]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2841 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:39.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438333134323264613563656165653235343130613039373362373366 Jan 15 00:31:39.772000 audit: BPF prog-id=148 op=LOAD Jan 15 00:31:39.772000 audit[3107]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2841 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:39.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438333134323264613563656165653235343130613039373362373366 Jan 15 00:31:39.772000 audit: BPF prog-id=149 op=LOAD Jan 15 00:31:39.772000 audit[3107]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2841 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:39.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438333134323264613563656165653235343130613039373362373366 Jan 15 00:31:39.772000 audit: BPF prog-id=149 op=UNLOAD Jan 15 00:31:39.772000 audit[3107]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2841 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:39.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438333134323264613563656165653235343130613039373362373366 Jan 15 00:31:39.772000 audit: BPF prog-id=148 op=UNLOAD Jan 15 00:31:39.772000 audit[3107]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2841 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:39.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438333134323264613563656165653235343130613039373362373366 Jan 15 00:31:39.772000 audit: BPF prog-id=150 op=LOAD Jan 15 00:31:39.772000 audit[3107]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2841 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:39.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438333134323264613563656165653235343130613039373362373366 Jan 15 00:31:39.795616 containerd[1616]: time="2026-01-15T00:31:39.795483015Z" level=info msg="StartContainer for \"d831422da5ceaee25410a0973b73f24795d5cb229dedc7c4aa294717cc1fadd0\" returns successfully" Jan 15 00:31:40.071558 kubelet[2785]: I0115 00:31:40.071464 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-v485p" podStartSLOduration=1.415339469 podStartE2EDuration="6.071441197s" podCreationTimestamp="2026-01-15 00:31:34 +0000 UTC" firstStartedPulling="2026-01-15 00:31:35.042718343 +0000 UTC m=+6.258981606" lastFinishedPulling="2026-01-15 00:31:39.698820075 +0000 UTC m=+10.915083334" observedRunningTime="2026-01-15 00:31:40.071124034 +0000 UTC m=+11.287387301" watchObservedRunningTime="2026-01-15 00:31:40.071441197 +0000 UTC m=+11.287704463" Jan 15 00:31:41.801183 kubelet[2785]: E0115 00:31:41.801137 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 
67.207.67.2" Jan 15 00:31:46.551658 sudo[1837]: pam_unix(sudo:session): session closed for user root Jan 15 00:31:46.558791 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 15 00:31:46.558974 kernel: audit: type=1106 audit(1768437106.550:523): pid=1837 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 00:31:46.550000 audit[1837]: USER_END pid=1837 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 00:31:46.550000 audit[1837]: CRED_DISP pid=1837 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 00:31:46.566076 kernel: audit: type=1104 audit(1768437106.550:524): pid=1837 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 00:31:46.622755 sshd[1836]: Connection closed by 20.161.92.111 port 47006 Jan 15 00:31:46.624178 sshd-session[1833]: pam_unix(sshd:session): session closed for user core Jan 15 00:31:46.625000 audit[1833]: USER_END pid=1833 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:31:46.632497 kernel: audit: type=1106 audit(1768437106.625:525): pid=1833 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:31:46.625000 audit[1833]: CRED_DISP pid=1833 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:31:46.642399 kernel: audit: type=1104 audit(1768437106.625:526): pid=1833 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:31:46.641458 systemd[1]: sshd@6-164.92.64.55:22-20.161.92.111:47006.service: Deactivated successfully. Jan 15 00:31:46.642215 systemd-logind[1592]: Session 7 logged out. Waiting for processes to exit. Jan 15 00:31:46.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-164.92.64.55:22-20.161.92.111:47006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:31:46.650145 kernel: audit: type=1131 audit(1768437106.642:527): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-164.92.64.55:22-20.161.92.111:47006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:31:46.650949 systemd[1]: session-7.scope: Deactivated successfully. Jan 15 00:31:46.652765 systemd[1]: session-7.scope: Consumed 6.006s CPU time, 151.2M memory peak. Jan 15 00:31:46.660767 systemd-logind[1592]: Removed session 7. Jan 15 00:31:47.326000 audit[3183]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3183 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:31:47.331048 kernel: audit: type=1325 audit(1768437107.326:528): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3183 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:31:47.337057 kernel: audit: type=1300 audit(1768437107.326:528): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc157b1ea0 a2=0 a3=7ffc157b1e8c items=0 ppid=2937 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:47.326000 audit[3183]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc157b1ea0 a2=0 a3=7ffc157b1e8c items=0 ppid=2937 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:47.326000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:31:47.342046 kernel: audit: type=1327 audit(1768437107.326:528): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:31:47.331000 audit[3183]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3183 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:31:47.351286 kernel: audit: type=1325 audit(1768437107.331:529): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3183 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:31:47.351454 kernel: audit: type=1300 audit(1768437107.331:529): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc157b1ea0 a2=0 a3=0 items=0 ppid=2937 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:47.331000 audit[3183]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc157b1ea0 a2=0 a3=0 items=0 ppid=2937 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:47.331000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:31:47.364000 audit[3185]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3185 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:31:47.364000 audit[3185]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe69eab870 a2=0 a3=7ffe69eab85c items=0 ppid=2937 pid=3185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:47.364000 audit: 
PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:31:47.376000 audit[3185]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3185 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:31:47.376000 audit[3185]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe69eab870 a2=0 a3=0 items=0 ppid=2937 pid=3185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:47.376000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:31:50.214000 audit[3189]: NETFILTER_CFG table=filter:109 family=2 entries=16 op=nft_register_rule pid=3189 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:31:50.214000 audit[3189]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd8cb98bb0 a2=0 a3=7ffd8cb98b9c items=0 ppid=2937 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:50.214000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:31:50.219000 audit[3189]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3189 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:31:50.219000 audit[3189]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd8cb98bb0 a2=0 a3=0 items=0 ppid=2937 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:50.219000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:31:50.242000 audit[3191]: NETFILTER_CFG table=filter:111 family=2 entries=17 op=nft_register_rule pid=3191 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:31:50.242000 audit[3191]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe28c6bb40 a2=0 a3=7ffe28c6bb2c items=0 ppid=2937 pid=3191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:50.242000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:31:50.246000 audit[3191]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3191 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:31:50.246000 audit[3191]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe28c6bb40 a2=0 a3=0 items=0 ppid=2937 pid=3191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:50.246000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 
00:31:51.262000 audit[3193]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3193 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:31:51.262000 audit[3193]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff55223760 a2=0 a3=7fff5522374c items=0 ppid=2937 pid=3193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:51.262000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:31:51.266000 audit[3193]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3193 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:31:51.266000 audit[3193]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff55223760 a2=0 a3=0 items=0 ppid=2937 pid=3193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:51.266000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:31:52.775705 kernel: kauditd_printk_skb: 25 callbacks suppressed Jan 15 00:31:52.775942 kernel: audit: type=1325 audit(1768437112.769:538): table=filter:115 family=2 entries=21 op=nft_register_rule pid=3195 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:31:52.769000 audit[3195]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3195 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:31:52.792067 kubelet[2785]: W0115 00:31:52.791976 2785 reflector.go:569] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ci-4515.1.0-n-4ecc98c3fd" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4515.1.0-n-4ecc98c3fd' and this object Jan 15 00:31:52.792882 kubelet[2785]: E0115 00:31:52.792309 2785 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:ci-4515.1.0-n-4ecc98c3fd\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4515.1.0-n-4ecc98c3fd' and this object" logger="UnhandledError" Jan 15 00:31:52.793301 systemd[1]: Created slice kubepods-besteffort-poda0355938_8a35_401b_ad9e_0f9ebdbf44ef.slice - libcontainer container kubepods-besteffort-poda0355938_8a35_401b_ad9e_0f9ebdbf44ef.slice. 
Jan 15 00:31:52.794532 kubelet[2785]: I0115 00:31:52.794481 2785 status_manager.go:890] "Failed to get status for pod" podUID="a0355938-8a35-401b-ad9e-0f9ebdbf44ef" pod="calico-system/calico-typha-784d8b8c59-hvvck" err="pods \"calico-typha-784d8b8c59-hvvck\" is forbidden: User \"system:node:ci-4515.1.0-n-4ecc98c3fd\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4515.1.0-n-4ecc98c3fd' and this object" Jan 15 00:31:52.794803 kubelet[2785]: W0115 00:31:52.794723 2785 reflector.go:569] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ci-4515.1.0-n-4ecc98c3fd" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4515.1.0-n-4ecc98c3fd' and this object Jan 15 00:31:52.794803 kubelet[2785]: E0115 00:31:52.794774 2785 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"tigera-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"tigera-ca-bundle\" is forbidden: User \"system:node:ci-4515.1.0-n-4ecc98c3fd\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4515.1.0-n-4ecc98c3fd' and this object" logger="UnhandledError" Jan 15 00:31:52.796446 kubelet[2785]: W0115 00:31:52.796249 2785 reflector.go:569] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4515.1.0-n-4ecc98c3fd" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4515.1.0-n-4ecc98c3fd' and this object Jan 15 00:31:52.796800 kubelet[2785]: E0115 00:31:52.796687 2785 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4515.1.0-n-4ecc98c3fd\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4515.1.0-n-4ecc98c3fd' and this object" logger="UnhandledError" Jan 15 00:31:52.769000 audit[3195]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd9788f080 a2=0 a3=7ffd9788f06c items=0 ppid=2937 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:52.805060 kernel: audit: type=1300 audit(1768437112.769:538): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd9788f080 a2=0 a3=7ffd9788f06c items=0 ppid=2937 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:52.806411 kubelet[2785]: I0115 00:31:52.806371 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a0355938-8a35-401b-ad9e-0f9ebdbf44ef-typha-certs\") pod \"calico-typha-784d8b8c59-hvvck\" (UID: \"a0355938-8a35-401b-ad9e-0f9ebdbf44ef\") " pod="calico-system/calico-typha-784d8b8c59-hvvck" Jan 15 00:31:52.806629 kubelet[2785]: I0115 00:31:52.806441 2785 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0355938-8a35-401b-ad9e-0f9ebdbf44ef-tigera-ca-bundle\") pod \"calico-typha-784d8b8c59-hvvck\" (UID: \"a0355938-8a35-401b-ad9e-0f9ebdbf44ef\") " pod="calico-system/calico-typha-784d8b8c59-hvvck" Jan 15 00:31:52.806629 kubelet[2785]: I0115 00:31:52.806462 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrvcs\" (UniqueName: \"kubernetes.io/projected/a0355938-8a35-401b-ad9e-0f9ebdbf44ef-kube-api-access-lrvcs\") pod \"calico-typha-784d8b8c59-hvvck\" (UID: \"a0355938-8a35-401b-ad9e-0f9ebdbf44ef\") " pod="calico-system/calico-typha-784d8b8c59-hvvck" Jan 15 00:31:52.769000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:31:52.812055 kernel: audit: type=1327 audit(1768437112.769:538): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:31:52.804000 audit[3195]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3195 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:31:52.816121 kernel: audit: type=1325 audit(1768437112.804:539): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3195 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:31:52.804000 audit[3195]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd9788f080 a2=0 a3=0 items=0 ppid=2937 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:52.827052 kernel: audit: type=1300 audit(1768437112.804:539): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd9788f080 a2=0 a3=0 items=0 ppid=2937 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:52.804000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:31:52.832058 kernel: audit: type=1327 audit(1768437112.804:539): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:31:53.009071 kubelet[2785]: I0115 00:31:53.007835 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ff1b46a-31bd-4fa1-a3bb-58df39be5f06-xtables-lock\") pod \"calico-node-mq72d\" (UID: \"5ff1b46a-31bd-4fa1-a3bb-58df39be5f06\") " pod="calico-system/calico-node-mq72d" Jan 15 00:31:53.009071 kubelet[2785]: I0115 00:31:53.007897 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xh22\" (UniqueName: \"kubernetes.io/projected/5ff1b46a-31bd-4fa1-a3bb-58df39be5f06-kube-api-access-6xh22\") pod \"calico-node-mq72d\" (UID: \"5ff1b46a-31bd-4fa1-a3bb-58df39be5f06\") " pod="calico-system/calico-node-mq72d" Jan 15 00:31:53.009071 kubelet[2785]: I0115 00:31:53.007980 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/5ff1b46a-31bd-4fa1-a3bb-58df39be5f06-cni-net-dir\") pod \"calico-node-mq72d\" (UID: \"5ff1b46a-31bd-4fa1-a3bb-58df39be5f06\") " pod="calico-system/calico-node-mq72d" Jan 15 00:31:53.009489 kubelet[2785]: I0115 00:31:53.008008 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5ff1b46a-31bd-4fa1-a3bb-58df39be5f06-flexvol-driver-host\") pod \"calico-node-mq72d\" (UID: \"5ff1b46a-31bd-4fa1-a3bb-58df39be5f06\") " pod="calico-system/calico-node-mq72d" Jan 15 00:31:53.009723 kubelet[2785]: I0115 00:31:53.009469 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ff1b46a-31bd-4fa1-a3bb-58df39be5f06-lib-modules\") pod \"calico-node-mq72d\" (UID: \"5ff1b46a-31bd-4fa1-a3bb-58df39be5f06\") " pod="calico-system/calico-node-mq72d" Jan 15 00:31:53.010031 kubelet[2785]: I0115 00:31:53.009903 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5ff1b46a-31bd-4fa1-a3bb-58df39be5f06-policysync\") pod \"calico-node-mq72d\" (UID: \"5ff1b46a-31bd-4fa1-a3bb-58df39be5f06\") " pod="calico-system/calico-node-mq72d" Jan 15 00:31:53.010031 kubelet[2785]: I0115 00:31:53.009983 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ff1b46a-31bd-4fa1-a3bb-58df39be5f06-tigera-ca-bundle\") pod \"calico-node-mq72d\" (UID: \"5ff1b46a-31bd-4fa1-a3bb-58df39be5f06\") " pod="calico-system/calico-node-mq72d" Jan 15 00:31:53.011039 kubelet[2785]: I0115 00:31:53.010197 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5ff1b46a-31bd-4fa1-a3bb-58df39be5f06-var-run-calico\") pod \"calico-node-mq72d\" (UID: \"5ff1b46a-31bd-4fa1-a3bb-58df39be5f06\") " pod="calico-system/calico-node-mq72d" Jan 15 00:31:53.011039 kubelet[2785]: I0115 00:31:53.010234 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5ff1b46a-31bd-4fa1-a3bb-58df39be5f06-cni-bin-dir\") pod \"calico-node-mq72d\" (UID: \"5ff1b46a-31bd-4fa1-a3bb-58df39be5f06\") " pod="calico-system/calico-node-mq72d" Jan 15 00:31:53.011039 kubelet[2785]: I0115 00:31:53.010260 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5ff1b46a-31bd-4fa1-a3bb-58df39be5f06-node-certs\") pod \"calico-node-mq72d\" (UID: \"5ff1b46a-31bd-4fa1-a3bb-58df39be5f06\") " pod="calico-system/calico-node-mq72d" Jan 15 00:31:53.011039 kubelet[2785]: I0115 00:31:53.010283 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5ff1b46a-31bd-4fa1-a3bb-58df39be5f06-var-lib-calico\") pod \"calico-node-mq72d\" (UID: \"5ff1b46a-31bd-4fa1-a3bb-58df39be5f06\") " pod="calico-system/calico-node-mq72d" Jan 15 00:31:53.011039 kubelet[2785]: I0115 00:31:53.010309 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5ff1b46a-31bd-4fa1-a3bb-58df39be5f06-cni-log-dir\") pod 
\"calico-node-mq72d\" (UID: \"5ff1b46a-31bd-4fa1-a3bb-58df39be5f06\") " pod="calico-system/calico-node-mq72d" Jan 15 00:31:53.010605 systemd[1]: Created slice kubepods-besteffort-pod5ff1b46a_31bd_4fa1_a3bb_58df39be5f06.slice - libcontainer container kubepods-besteffort-pod5ff1b46a_31bd_4fa1_a3bb_58df39be5f06.slice. Jan 15 00:31:53.122000 kubelet[2785]: E0115 00:31:53.121878 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.123320 kubelet[2785]: W0115 00:31:53.123284 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.124925 kubelet[2785]: E0115 00:31:53.124366 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.178267 kubelet[2785]: E0115 00:31:53.177989 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rjlcz" podUID="14ced92d-cf89-41f0-99bf-edc9c92a737b" Jan 15 00:31:53.182169 kubelet[2785]: I0115 00:31:53.182110 2785 status_manager.go:890] "Failed to get status for pod" podUID="14ced92d-cf89-41f0-99bf-edc9c92a737b" pod="calico-system/csi-node-driver-rjlcz" err="pods \"csi-node-driver-rjlcz\" is forbidden: User \"system:node:ci-4515.1.0-n-4ecc98c3fd\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4515.1.0-n-4ecc98c3fd' and this object" Jan 15 00:31:53.209069 kubelet[2785]: E0115 00:31:53.208880 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.209069 kubelet[2785]: W0115 00:31:53.208914 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.209069 kubelet[2785]: E0115 00:31:53.208940 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.209357 kubelet[2785]: E0115 00:31:53.209173 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.209357 kubelet[2785]: W0115 00:31:53.209184 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.209357 kubelet[2785]: E0115 00:31:53.209239 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:31:53.209454 kubelet[2785]: E0115 00:31:53.209437 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.209484 kubelet[2785]: W0115 00:31:53.209455 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.209484 kubelet[2785]: E0115 00:31:53.209471 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.209793 kubelet[2785]: E0115 00:31:53.209777 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.209793 kubelet[2785]: W0115 00:31:53.209792 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.209875 kubelet[2785]: E0115 00:31:53.209803 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.210084 kubelet[2785]: E0115 00:31:53.210056 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.210084 kubelet[2785]: W0115 00:31:53.210067 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.210084 kubelet[2785]: E0115 00:31:53.210078 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.210242 kubelet[2785]: E0115 00:31:53.210219 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.210242 kubelet[2785]: W0115 00:31:53.210226 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.210242 kubelet[2785]: E0115 00:31:53.210234 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.210405 kubelet[2785]: E0115 00:31:53.210361 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.210405 kubelet[2785]: W0115 00:31:53.210371 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.210405 kubelet[2785]: E0115 00:31:53.210378 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:31:53.210603 kubelet[2785]: E0115 00:31:53.210547 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.210603 kubelet[2785]: W0115 00:31:53.210554 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.210603 kubelet[2785]: E0115 00:31:53.210562 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.210733 kubelet[2785]: E0115 00:31:53.210722 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.210733 kubelet[2785]: W0115 00:31:53.210732 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.210790 kubelet[2785]: E0115 00:31:53.210740 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.210922 kubelet[2785]: E0115 00:31:53.210904 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.210922 kubelet[2785]: W0115 00:31:53.210920 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.211010 kubelet[2785]: E0115 00:31:53.210932 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.211184 kubelet[2785]: E0115 00:31:53.211167 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.211221 kubelet[2785]: W0115 00:31:53.211184 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.211221 kubelet[2785]: E0115 00:31:53.211197 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.211385 kubelet[2785]: E0115 00:31:53.211371 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.211385 kubelet[2785]: W0115 00:31:53.211384 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.211456 kubelet[2785]: E0115 00:31:53.211394 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:31:53.211564 kubelet[2785]: E0115 00:31:53.211549 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.211564 kubelet[2785]: W0115 00:31:53.211560 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.211643 kubelet[2785]: E0115 00:31:53.211572 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.211713 kubelet[2785]: E0115 00:31:53.211699 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.211713 kubelet[2785]: W0115 00:31:53.211710 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.211905 kubelet[2785]: E0115 00:31:53.211721 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.211958 kubelet[2785]: E0115 00:31:53.211937 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.211958 kubelet[2785]: W0115 00:31:53.211948 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.212098 kubelet[2785]: E0115 00:31:53.211958 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.212147 kubelet[2785]: E0115 00:31:53.212135 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.212147 kubelet[2785]: W0115 00:31:53.212142 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.212211 kubelet[2785]: E0115 00:31:53.212150 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.212356 kubelet[2785]: E0115 00:31:53.212342 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.212356 kubelet[2785]: W0115 00:31:53.212355 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.212425 kubelet[2785]: E0115 00:31:53.212367 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:31:53.212551 kubelet[2785]: E0115 00:31:53.212537 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.212551 kubelet[2785]: W0115 00:31:53.212548 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.212647 kubelet[2785]: E0115 00:31:53.212557 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.212703 kubelet[2785]: E0115 00:31:53.212690 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.212703 kubelet[2785]: W0115 00:31:53.212702 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.212810 kubelet[2785]: E0115 00:31:53.212714 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.212946 kubelet[2785]: E0115 00:31:53.212904 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.212946 kubelet[2785]: W0115 00:31:53.212912 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.212946 kubelet[2785]: E0115 00:31:53.212921 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.213254 kubelet[2785]: E0115 00:31:53.213183 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.213254 kubelet[2785]: W0115 00:31:53.213245 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.213517 kubelet[2785]: E0115 00:31:53.213261 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:31:53.213517 kubelet[2785]: I0115 00:31:53.213302 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/14ced92d-cf89-41f0-99bf-edc9c92a737b-varrun\") pod \"csi-node-driver-rjlcz\" (UID: \"14ced92d-cf89-41f0-99bf-edc9c92a737b\") " pod="calico-system/csi-node-driver-rjlcz" Jan 15 00:31:53.213601 kubelet[2785]: E0115 00:31:53.213520 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.213601 kubelet[2785]: W0115 00:31:53.213535 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.213601 kubelet[2785]: E0115 00:31:53.213555 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.213601 kubelet[2785]: I0115 00:31:53.213584 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brgll\" (UniqueName: \"kubernetes.io/projected/14ced92d-cf89-41f0-99bf-edc9c92a737b-kube-api-access-brgll\") pod \"csi-node-driver-rjlcz\" (UID: \"14ced92d-cf89-41f0-99bf-edc9c92a737b\") " pod="calico-system/csi-node-driver-rjlcz" Jan 15 00:31:53.213934 kubelet[2785]: E0115 00:31:53.213748 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.213934 kubelet[2785]: W0115 00:31:53.213757 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.213934 kubelet[2785]: E0115 00:31:53.213769 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.213934 kubelet[2785]: I0115 00:31:53.213784 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/14ced92d-cf89-41f0-99bf-edc9c92a737b-socket-dir\") pod \"csi-node-driver-rjlcz\" (UID: \"14ced92d-cf89-41f0-99bf-edc9c92a737b\") " pod="calico-system/csi-node-driver-rjlcz" Jan 15 00:31:53.214218 kubelet[2785]: E0115 00:31:53.214008 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.214218 kubelet[2785]: W0115 00:31:53.214105 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.214218 kubelet[2785]: E0115 00:31:53.214128 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:31:53.214218 kubelet[2785]: I0115 00:31:53.214152 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14ced92d-cf89-41f0-99bf-edc9c92a737b-kubelet-dir\") pod \"csi-node-driver-rjlcz\" (UID: \"14ced92d-cf89-41f0-99bf-edc9c92a737b\") " pod="calico-system/csi-node-driver-rjlcz" Jan 15 00:31:53.214371 kubelet[2785]: E0115 00:31:53.214357 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.214371 kubelet[2785]: W0115 00:31:53.214370 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.214460 kubelet[2785]: E0115 00:31:53.214384 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.214460 kubelet[2785]: I0115 00:31:53.214400 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/14ced92d-cf89-41f0-99bf-edc9c92a737b-registration-dir\") pod \"csi-node-driver-rjlcz\" (UID: \"14ced92d-cf89-41f0-99bf-edc9c92a737b\") " pod="calico-system/csi-node-driver-rjlcz" Jan 15 00:31:53.214585 kubelet[2785]: E0115 00:31:53.214575 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.214618 kubelet[2785]: W0115 00:31:53.214584 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.214618 kubelet[2785]: E0115 00:31:53.214597 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.214753 kubelet[2785]: E0115 00:31:53.214726 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.214753 kubelet[2785]: W0115 00:31:53.214734 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.214872 kubelet[2785]: E0115 00:31:53.214784 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.214872 kubelet[2785]: E0115 00:31:53.214859 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.214872 kubelet[2785]: W0115 00:31:53.214867 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.214983 kubelet[2785]: E0115 00:31:53.214891 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:31:53.215055 kubelet[2785]: E0115 00:31:53.215033 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.215055 kubelet[2785]: W0115 00:31:53.215046 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.215249 kubelet[2785]: E0115 00:31:53.215081 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.215249 kubelet[2785]: E0115 00:31:53.215194 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.215249 kubelet[2785]: W0115 00:31:53.215201 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.215249 kubelet[2785]: E0115 00:31:53.215221 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.215410 kubelet[2785]: E0115 00:31:53.215337 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.215410 kubelet[2785]: W0115 00:31:53.215344 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.215410 kubelet[2785]: E0115 00:31:53.215366 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.215574 kubelet[2785]: E0115 00:31:53.215475 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.215574 kubelet[2785]: W0115 00:31:53.215482 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.215574 kubelet[2785]: E0115 00:31:53.215491 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.215681 kubelet[2785]: E0115 00:31:53.215667 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.215726 kubelet[2785]: W0115 00:31:53.215682 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.215726 kubelet[2785]: E0115 00:31:53.215695 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:31:53.215878 kubelet[2785]: E0115 00:31:53.215868 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.215878 kubelet[2785]: W0115 00:31:53.215878 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.215950 kubelet[2785]: E0115 00:31:53.215886 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.216097 kubelet[2785]: E0115 00:31:53.216087 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.216097 kubelet[2785]: W0115 00:31:53.216096 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.216161 kubelet[2785]: E0115 00:31:53.216106 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.315048 kubelet[2785]: E0115 00:31:53.314937 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.315048 kubelet[2785]: W0115 00:31:53.314964 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.315048 kubelet[2785]: E0115 00:31:53.314987 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.315417 kubelet[2785]: E0115 00:31:53.315229 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.315417 kubelet[2785]: W0115 00:31:53.315238 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.315417 kubelet[2785]: E0115 00:31:53.315253 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.315511 kubelet[2785]: E0115 00:31:53.315437 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.315511 kubelet[2785]: W0115 00:31:53.315445 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.315511 kubelet[2785]: E0115 00:31:53.315454 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:31:53.315742 kubelet[2785]: E0115 00:31:53.315614 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.315742 kubelet[2785]: W0115 00:31:53.315625 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.315742 kubelet[2785]: E0115 00:31:53.315638 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.316071 kubelet[2785]: E0115 00:31:53.315888 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.316071 kubelet[2785]: W0115 00:31:53.315905 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.316071 kubelet[2785]: E0115 00:31:53.315927 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.316217 kubelet[2785]: E0115 00:31:53.316206 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.316291 kubelet[2785]: W0115 00:31:53.316280 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.316414 kubelet[2785]: E0115 00:31:53.316392 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.316631 kubelet[2785]: E0115 00:31:53.316615 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.316699 kubelet[2785]: W0115 00:31:53.316688 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.316863 kubelet[2785]: E0115 00:31:53.316846 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.317129 kubelet[2785]: E0115 00:31:53.317116 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.317371 kubelet[2785]: W0115 00:31:53.317272 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.317371 kubelet[2785]: E0115 00:31:53.317310 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:31:53.317562 kubelet[2785]: E0115 00:31:53.317542 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.317562 kubelet[2785]: W0115 00:31:53.317559 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.317702 kubelet[2785]: E0115 00:31:53.317624 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.317739 kubelet[2785]: E0115 00:31:53.317725 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.317739 kubelet[2785]: W0115 00:31:53.317732 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.318004 kubelet[2785]: E0115 00:31:53.317816 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.318004 kubelet[2785]: E0115 00:31:53.317875 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.318004 kubelet[2785]: W0115 00:31:53.317882 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.318004 kubelet[2785]: E0115 00:31:53.317901 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.318361 kubelet[2785]: E0115 00:31:53.318045 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.318361 kubelet[2785]: W0115 00:31:53.318052 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.318361 kubelet[2785]: E0115 00:31:53.318068 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.318361 kubelet[2785]: E0115 00:31:53.318208 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.318361 kubelet[2785]: W0115 00:31:53.318215 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.318361 kubelet[2785]: E0115 00:31:53.318226 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:31:53.319372 kubelet[2785]: E0115 00:31:53.318408 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.319372 kubelet[2785]: W0115 00:31:53.318416 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.319372 kubelet[2785]: E0115 00:31:53.318428 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.319372 kubelet[2785]: E0115 00:31:53.319152 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.319372 kubelet[2785]: W0115 00:31:53.319170 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.319372 kubelet[2785]: E0115 00:31:53.319195 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.320467 kubelet[2785]: E0115 00:31:53.320234 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.320467 kubelet[2785]: W0115 00:31:53.320269 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.320467 kubelet[2785]: E0115 00:31:53.320308 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.321140 kubelet[2785]: E0115 00:31:53.320971 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.321140 kubelet[2785]: W0115 00:31:53.320991 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.321140 kubelet[2785]: E0115 00:31:53.321077 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.321683 kubelet[2785]: E0115 00:31:53.321637 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.321940 kubelet[2785]: W0115 00:31:53.321778 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.321940 kubelet[2785]: E0115 00:31:53.321835 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:31:53.322250 kubelet[2785]: E0115 00:31:53.322236 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.322399 kubelet[2785]: W0115 00:31:53.322308 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.322399 kubelet[2785]: E0115 00:31:53.322346 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.323145 kubelet[2785]: E0115 00:31:53.323130 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.323257 kubelet[2785]: W0115 00:31:53.323243 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.323489 kubelet[2785]: E0115 00:31:53.323402 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.323748 kubelet[2785]: E0115 00:31:53.323725 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.323992 kubelet[2785]: W0115 00:31:53.323923 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.324194 kubelet[2785]: E0115 00:31:53.324171 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.324362 kubelet[2785]: E0115 00:31:53.324339 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.324571 kubelet[2785]: W0115 00:31:53.324454 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.324571 kubelet[2785]: E0115 00:31:53.324477 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.324839 kubelet[2785]: E0115 00:31:53.324819 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.324951 kubelet[2785]: W0115 00:31:53.324928 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.325287 kubelet[2785]: E0115 00:31:53.325257 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:31:53.325983 kubelet[2785]: E0115 00:31:53.325961 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.325983 kubelet[2785]: W0115 00:31:53.325977 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.326301 kubelet[2785]: E0115 00:31:53.325996 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.326301 kubelet[2785]: E0115 00:31:53.326271 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.326301 kubelet[2785]: W0115 00:31:53.326283 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.326301 kubelet[2785]: E0115 00:31:53.326299 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.846000 audit[3265]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3265 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:31:53.855076 kernel: audit: type=1325 audit(1768437113.846:540): table=filter:117 family=2 entries=22 op=nft_register_rule pid=3265 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:31:53.855187 kernel: audit: type=1300 audit(1768437113.846:540): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fffb04919c0 a2=0 a3=7fffb04919ac items=0 ppid=2937 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:53.846000 audit[3265]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fffb04919c0 a2=0 a3=7fffb04919ac items=0 ppid=2937 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:53.846000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:31:53.861146 kernel: audit: type=1327 audit(1768437113.846:540): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:31:53.860000 audit[3265]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3265 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:31:53.865804 kernel: audit: type=1325 audit(1768437113.860:541): table=nat:118 family=2 entries=12 op=nft_register_rule pid=3265 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:31:53.860000 audit[3265]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffb04919c0 a2=0 a3=0 items=0 ppid=2937 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:53.860000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:31:53.875337 kubelet[2785]: E0115 00:31:53.874891 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.875337 kubelet[2785]: W0115 00:31:53.874928 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.875337 kubelet[2785]: E0115 00:31:53.875057 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.878325 kubelet[2785]: E0115 00:31:53.878274 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.878325 kubelet[2785]: W0115 00:31:53.878308 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.878644 kubelet[2785]: E0115 00:31:53.878356 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.878730 kubelet[2785]: E0115 00:31:53.878671 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.878730 kubelet[2785]: W0115 00:31:53.878685 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.878730 kubelet[2785]: E0115 00:31:53.878698 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.908577 kubelet[2785]: E0115 00:31:53.908197 2785 secret.go:189] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition Jan 15 00:31:53.908577 kubelet[2785]: E0115 00:31:53.908316 2785 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0355938-8a35-401b-ad9e-0f9ebdbf44ef-typha-certs podName:a0355938-8a35-401b-ad9e-0f9ebdbf44ef nodeName:}" failed. No retries permitted until 2026-01-15 00:31:54.408292314 +0000 UTC m=+25.624555559 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/a0355938-8a35-401b-ad9e-0f9ebdbf44ef-typha-certs") pod "calico-typha-784d8b8c59-hvvck" (UID: "a0355938-8a35-401b-ad9e-0f9ebdbf44ef") : failed to sync secret cache: timed out waiting for the condition Jan 15 00:31:53.909070 kubelet[2785]: E0115 00:31:53.909045 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.909126 kubelet[2785]: W0115 00:31:53.909067 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.909126 kubelet[2785]: E0115 00:31:53.909112 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.914898 kubelet[2785]: E0115 00:31:53.914857 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.914898 kubelet[2785]: W0115 00:31:53.914884 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.914898 kubelet[2785]: E0115 00:31:53.914909 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.917431 kubelet[2785]: E0115 00:31:53.917273 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:31:53.918848 containerd[1616]: time="2026-01-15T00:31:53.918791526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mq72d,Uid:5ff1b46a-31bd-4fa1-a3bb-58df39be5f06,Namespace:calico-system,Attempt:0,}" Jan 15 00:31:53.921600 kubelet[2785]: E0115 00:31:53.921314 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:53.921600 kubelet[2785]: W0115 00:31:53.921517 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:53.922293 kubelet[2785]: E0115 00:31:53.921560 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:53.952810 containerd[1616]: time="2026-01-15T00:31:53.952701261Z" level=info msg="connecting to shim 4ff8348dd91ce219d46462ed5a2906a819e0521d575d304c03ee4c1d11f340a6" address="unix:///run/containerd/s/b2aac25a5be3a1564bfe713a18d86619f160fb47856605f26ea1582225a381ec" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:31:54.002410 systemd[1]: Started cri-containerd-4ff8348dd91ce219d46462ed5a2906a819e0521d575d304c03ee4c1d11f340a6.scope - libcontainer container 4ff8348dd91ce219d46462ed5a2906a819e0521d575d304c03ee4c1d11f340a6. 
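The kubelet messages repeated throughout this section ("Failed to unmarshal output for command: init ... unexpected end of JSON input" paired with "executable file not found") come from probing a FlexVolume driver whose binary, /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, is not installed on this node: the probe executes the binary and tries to parse a JSON status object from its stdout, and an empty output fails JSON decoding. A minimal Go sketch of that failure mode follows; it is an illustration of the mechanism, not the kubelet's actual driver-call code, and the status struct fields are assumptions.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus approximates the JSON object a FlexVolume driver is expected
// to print on stdout ({"status":"Success",...}); the exact fields here are
// an assumption for illustration.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

// callDriver runs the driver binary and decodes its stdout as JSON, roughly
// what the kubelet's FlexVolume probe does for the "init" command.
func callDriver(driver string, args ...string) (*driverStatus, error) {
	out, err := exec.Command(driver, args...).CombinedOutput()
	if err != nil {
		// Binary missing: exec fails and out stays empty, mirroring the
		// W-level "driver call failed ... output: \"\"" lines above.
		fmt.Printf("driver call failed: %v, output: %q\n", err, out)
	}
	var st driverStatus
	// json.Unmarshal of empty output returns "unexpected end of JSON input",
	// the E-level message repeated throughout this section.
	if uerr := json.Unmarshal(out, &st); uerr != nil {
		return nil, fmt.Errorf("failed to unmarshal output for command: %s, output: %q, error: %v", args[0], out, uerr)
	}
	return &st, nil
}

func main() {
	_, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
	fmt.Println(err)
}

Run on a host without that uds binary, the sketch prints a "driver call failed" warning with empty output followed by the "unexpected end of JSON input" error, the same W/E pairing that recurs in the log whenever the kubelet re-probes the plugin directory.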
Jan 15 00:31:54.016000 audit: BPF prog-id=151 op=LOAD Jan 15 00:31:54.017000 audit: BPF prog-id=152 op=LOAD Jan 15 00:31:54.017000 audit[3296]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3285 pid=3296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:54.017000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466663833343864643931636532313964343634363265643561323930 Jan 15 00:31:54.017000 audit: BPF prog-id=152 op=UNLOAD Jan 15 00:31:54.017000 audit[3296]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3285 pid=3296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:54.017000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466663833343864643931636532313964343634363265643561323930 Jan 15 00:31:54.018000 audit: BPF prog-id=153 op=LOAD Jan 15 00:31:54.018000 audit[3296]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3285 pid=3296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:54.018000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466663833343864643931636532313964343634363265643561323930 Jan 15 00:31:54.018000 audit: BPF prog-id=154 op=LOAD Jan 15 00:31:54.018000 audit[3296]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3285 pid=3296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:54.018000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466663833343864643931636532313964343634363265643561323930 Jan 15 00:31:54.018000 audit: BPF prog-id=154 op=UNLOAD Jan 15 00:31:54.018000 audit[3296]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3285 pid=3296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:54.018000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466663833343864643931636532313964343634363265643561323930 Jan 15 00:31:54.019000 audit: BPF prog-id=153 op=UNLOAD Jan 15 00:31:54.019000 audit[3296]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3285 pid=3296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:54.019000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466663833343864643931636532313964343634363265643561323930 Jan 15 00:31:54.019000 audit: BPF prog-id=155 op=LOAD Jan 15 00:31:54.019000 audit[3296]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3285 pid=3296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:54.019000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466663833343864643931636532313964343634363265643561323930 Jan 15 00:31:54.024148 kubelet[2785]: E0115 00:31:54.024100 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:54.024148 kubelet[2785]: W0115 00:31:54.024129 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:54.024148 kubelet[2785]: E0115 00:31:54.024155 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:54.049352 containerd[1616]: time="2026-01-15T00:31:54.049167606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mq72d,Uid:5ff1b46a-31bd-4fa1-a3bb-58df39be5f06,Namespace:calico-system,Attempt:0,} returns sandbox id \"4ff8348dd91ce219d46462ed5a2906a819e0521d575d304c03ee4c1d11f340a6\"" Jan 15 00:31:54.052592 kubelet[2785]: E0115 00:31:54.052543 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:31:54.057093 containerd[1616]: time="2026-01-15T00:31:54.057038353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 15 00:31:54.126243 kubelet[2785]: E0115 00:31:54.125842 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:54.126243 kubelet[2785]: W0115 00:31:54.125999 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:54.126243 kubelet[2785]: E0115 00:31:54.126144 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:31:54.229517 kubelet[2785]: E0115 00:31:54.229135 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:54.229778 kubelet[2785]: W0115 00:31:54.229602 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:54.229909 kubelet[2785]: E0115 00:31:54.229828 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:54.331006 kubelet[2785]: E0115 00:31:54.330943 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:54.331006 kubelet[2785]: W0115 00:31:54.330972 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:54.331409 kubelet[2785]: E0115 00:31:54.331287 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:54.432726 kubelet[2785]: E0115 00:31:54.432566 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:54.432726 kubelet[2785]: W0115 00:31:54.432601 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:54.432726 kubelet[2785]: E0115 00:31:54.432631 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:54.433471 kubelet[2785]: E0115 00:31:54.433393 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:54.433471 kubelet[2785]: W0115 00:31:54.433415 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:54.433471 kubelet[2785]: E0115 00:31:54.433438 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:54.433702 kubelet[2785]: E0115 00:31:54.433674 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:54.433801 kubelet[2785]: W0115 00:31:54.433708 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:54.433801 kubelet[2785]: E0115 00:31:54.433726 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:31:54.433988 kubelet[2785]: E0115 00:31:54.433972 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:54.434047 kubelet[2785]: W0115 00:31:54.433989 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:54.434047 kubelet[2785]: E0115 00:31:54.434005 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:54.434355 kubelet[2785]: E0115 00:31:54.434290 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:54.434355 kubelet[2785]: W0115 00:31:54.434307 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:54.434355 kubelet[2785]: E0115 00:31:54.434324 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:54.442613 kubelet[2785]: E0115 00:31:54.442476 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:31:54.442613 kubelet[2785]: W0115 00:31:54.442523 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:31:54.442613 kubelet[2785]: E0115 00:31:54.442548 2785 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:31:54.599079 kubelet[2785]: E0115 00:31:54.598731 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:31:54.601098 containerd[1616]: time="2026-01-15T00:31:54.599431158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-784d8b8c59-hvvck,Uid:a0355938-8a35-401b-ad9e-0f9ebdbf44ef,Namespace:calico-system,Attempt:0,}" Jan 15 00:31:54.635978 containerd[1616]: time="2026-01-15T00:31:54.635073172Z" level=info msg="connecting to shim d256f7077bac62b937b84971e2e961df178e725eebb25a83a70fbbcb3386e6c9" address="unix:///run/containerd/s/e21a46ea9a8c7ac26eab92d1fc112d0588b3df13d0aa8b506a87af3a3bd8750b" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:31:54.673511 systemd[1]: Started cri-containerd-d256f7077bac62b937b84971e2e961df178e725eebb25a83a70fbbcb3386e6c9.scope - libcontainer container d256f7077bac62b937b84971e2e961df178e725eebb25a83a70fbbcb3386e6c9. 
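The repeated driver-call.go and plugins.go errors above come from the kubelet's FlexVolume plugin probe: it tries to execute /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init and parse a JSON status object from its stdout; the binary is not present, so the output is empty and the unmarshal fails with "unexpected end of JSON input". A minimal sketch of that probe, for illustration only (Python stands in for the kubelet's Go code; the "Success" reply shown in the final comment is the conventional FlexVolume init response, not something taken from this log):

    # Illustrative re-creation of the kubelet's FlexVolume "init" probe (not kubelet code).
    import json
    import subprocess

    DRIVER = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

    def probe(driver_path: str) -> dict:
        try:
            out = subprocess.run([driver_path, "init"], capture_output=True, text=True).stdout
        except FileNotFoundError:
            out = ""            # missing binary -> "executable file not found in $PATH", empty output
        return json.loads(out)  # json.loads("") raises, the Python analogue of "unexpected end of JSON input"

    # A working driver would print something like {"status": "Success", "capabilities": {"attach": false}}.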
Jan 15 00:31:54.690000 audit: BPF prog-id=156 op=LOAD Jan 15 00:31:54.691000 audit: BPF prog-id=157 op=LOAD Jan 15 00:31:54.691000 audit[3353]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3342 pid=3353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:54.691000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432353666373037376261633632623933376238343937316532653936 Jan 15 00:31:54.691000 audit: BPF prog-id=157 op=UNLOAD Jan 15 00:31:54.691000 audit[3353]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3342 pid=3353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:54.691000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432353666373037376261633632623933376238343937316532653936 Jan 15 00:31:54.691000 audit: BPF prog-id=158 op=LOAD Jan 15 00:31:54.691000 audit[3353]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3342 pid=3353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:54.691000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432353666373037376261633632623933376238343937316532653936 Jan 15 00:31:54.691000 audit: BPF prog-id=159 op=LOAD Jan 15 00:31:54.691000 audit[3353]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3342 pid=3353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:54.691000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432353666373037376261633632623933376238343937316532653936 Jan 15 00:31:54.691000 audit: BPF prog-id=159 op=UNLOAD Jan 15 00:31:54.691000 audit[3353]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3342 pid=3353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:54.691000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432353666373037376261633632623933376238343937316532653936 Jan 15 00:31:54.691000 audit: BPF prog-id=158 op=UNLOAD Jan 15 00:31:54.691000 audit[3353]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3342 pid=3353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:54.691000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432353666373037376261633632623933376238343937316532653936 Jan 15 00:31:54.691000 audit: BPF prog-id=160 op=LOAD Jan 15 00:31:54.691000 audit[3353]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3342 pid=3353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:54.691000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432353666373037376261633632623933376238343937316532653936 Jan 15 00:31:54.738224 containerd[1616]: time="2026-01-15T00:31:54.737934566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-784d8b8c59-hvvck,Uid:a0355938-8a35-401b-ad9e-0f9ebdbf44ef,Namespace:calico-system,Attempt:0,} returns sandbox id \"d256f7077bac62b937b84971e2e961df178e725eebb25a83a70fbbcb3386e6c9\"" Jan 15 00:31:54.741316 kubelet[2785]: E0115 00:31:54.740149 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:31:54.988743 kubelet[2785]: E0115 00:31:54.986378 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rjlcz" podUID="14ced92d-cf89-41f0-99bf-edc9c92a737b" Jan 15 00:31:55.686402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1170919624.mount: Deactivated successfully. 
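The audit PROCTITLE records above carry the audited process's command line as one hex string in which NUL bytes separate the arguments. Decoding the leading portion shows the runc invocation performed by containerd's shim; a small sketch (Python, illustrative; the sample is only the complete prefix of one of the values above, the remainder encodes the --log path of the task):

    # Decode an audit PROCTITLE value: hex-encoded argv joined by NUL bytes.
    sample = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"

    def decode_proctitle(hex_value: str) -> list[str]:
        return [part.decode() for part in bytes.fromhex(hex_value).split(b"\x00") if part]

    print(decode_proctitle(sample))  # ['runc', '--root', '/run/containerd/runc/k8s.io']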
Jan 15 00:31:55.785844 containerd[1616]: time="2026-01-15T00:31:55.785121925Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:31:55.787225 containerd[1616]: time="2026-01-15T00:31:55.787168852Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 15 00:31:55.787893 containerd[1616]: time="2026-01-15T00:31:55.787853376Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:31:55.789893 containerd[1616]: time="2026-01-15T00:31:55.789855278Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:31:55.790739 containerd[1616]: time="2026-01-15T00:31:55.790352884Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.73324832s" Jan 15 00:31:55.790739 containerd[1616]: time="2026-01-15T00:31:55.790397804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 15 00:31:55.792069 containerd[1616]: time="2026-01-15T00:31:55.791996995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 15 00:31:55.795263 containerd[1616]: time="2026-01-15T00:31:55.795212281Z" level=info msg="CreateContainer within sandbox \"4ff8348dd91ce219d46462ed5a2906a819e0521d575d304c03ee4c1d11f340a6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 15 00:31:55.805346 containerd[1616]: time="2026-01-15T00:31:55.805284534Z" level=info msg="Container 7b03e701ca691959283cc50c9b32944e796738a325718b8325e5da9c1f8afd2b: CDI devices from CRI Config.CDIDevices: []" Jan 15 00:31:55.824520 containerd[1616]: time="2026-01-15T00:31:55.824388568Z" level=info msg="CreateContainer within sandbox \"4ff8348dd91ce219d46462ed5a2906a819e0521d575d304c03ee4c1d11f340a6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7b03e701ca691959283cc50c9b32944e796738a325718b8325e5da9c1f8afd2b\"" Jan 15 00:31:55.825523 containerd[1616]: time="2026-01-15T00:31:55.825128020Z" level=info msg="StartContainer for \"7b03e701ca691959283cc50c9b32944e796738a325718b8325e5da9c1f8afd2b\"" Jan 15 00:31:55.827253 containerd[1616]: time="2026-01-15T00:31:55.827216205Z" level=info msg="connecting to shim 7b03e701ca691959283cc50c9b32944e796738a325718b8325e5da9c1f8afd2b" address="unix:///run/containerd/s/b2aac25a5be3a1564bfe713a18d86619f160fb47856605f26ea1582225a381ec" protocol=ttrpc version=3 Jan 15 00:31:55.859776 systemd[1]: Started cri-containerd-7b03e701ca691959283cc50c9b32944e796738a325718b8325e5da9c1f8afd2b.scope - libcontainer container 7b03e701ca691959283cc50c9b32944e796738a325718b8325e5da9c1f8afd2b. 
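The "Pulled image ... in 1.73324832s" entry above is containerd's internally measured pull time; it agrees to well under a millisecond with the gap between the PullImage request logged at 00:31:54.057038353Z and the Pulled event at 00:31:55.790352884Z. A quick check of that arithmetic (Python, timestamps truncated to microseconds):

    from datetime import datetime

    start = datetime.fromisoformat("2026-01-15T00:31:54.057038+00:00")  # PullImage request logged
    end = datetime.fromisoformat("2026-01-15T00:31:55.790352+00:00")    # Pulled image event
    print((end - start).total_seconds())  # ~1.733314 s, consistent with the reported 1.73324832s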
Jan 15 00:31:55.923000 audit: BPF prog-id=161 op=LOAD Jan 15 00:31:55.923000 audit[3387]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3285 pid=3387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:55.923000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762303365373031636136393139353932383363633530633962333239 Jan 15 00:31:55.923000 audit: BPF prog-id=162 op=LOAD Jan 15 00:31:55.923000 audit[3387]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3285 pid=3387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:55.923000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762303365373031636136393139353932383363633530633962333239 Jan 15 00:31:55.923000 audit: BPF prog-id=162 op=UNLOAD Jan 15 00:31:55.923000 audit[3387]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3285 pid=3387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:55.923000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762303365373031636136393139353932383363633530633962333239 Jan 15 00:31:55.923000 audit: BPF prog-id=161 op=UNLOAD Jan 15 00:31:55.923000 audit[3387]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3285 pid=3387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:55.923000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762303365373031636136393139353932383363633530633962333239 Jan 15 00:31:55.923000 audit: BPF prog-id=163 op=LOAD Jan 15 00:31:55.923000 audit[3387]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3285 pid=3387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:55.923000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762303365373031636136393139353932383363633530633962333239 Jan 15 00:31:55.956725 containerd[1616]: time="2026-01-15T00:31:55.955614216Z" level=info msg="StartContainer for 
\"7b03e701ca691959283cc50c9b32944e796738a325718b8325e5da9c1f8afd2b\" returns successfully" Jan 15 00:31:55.967316 systemd[1]: cri-containerd-7b03e701ca691959283cc50c9b32944e796738a325718b8325e5da9c1f8afd2b.scope: Deactivated successfully. Jan 15 00:31:55.972000 audit: BPF prog-id=163 op=UNLOAD Jan 15 00:31:55.982472 containerd[1616]: time="2026-01-15T00:31:55.982398292Z" level=info msg="received container exit event container_id:\"7b03e701ca691959283cc50c9b32944e796738a325718b8325e5da9c1f8afd2b\" id:\"7b03e701ca691959283cc50c9b32944e796738a325718b8325e5da9c1f8afd2b\" pid:3400 exited_at:{seconds:1768437115 nanos:970234282}" Jan 15 00:31:56.021918 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b03e701ca691959283cc50c9b32944e796738a325718b8325e5da9c1f8afd2b-rootfs.mount: Deactivated successfully. Jan 15 00:31:56.170783 kubelet[2785]: E0115 00:31:56.169953 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:31:56.986052 kubelet[2785]: E0115 00:31:56.985943 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rjlcz" podUID="14ced92d-cf89-41f0-99bf-edc9c92a737b" Jan 15 00:31:58.277057 containerd[1616]: time="2026-01-15T00:31:58.276242807Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:31:58.278042 containerd[1616]: time="2026-01-15T00:31:58.277980684Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 15 00:31:58.299941 containerd[1616]: time="2026-01-15T00:31:58.299861843Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:31:58.308894 containerd[1616]: time="2026-01-15T00:31:58.308746118Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:31:58.309745 containerd[1616]: time="2026-01-15T00:31:58.309706609Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.51557129s" Jan 15 00:31:58.309885 containerd[1616]: time="2026-01-15T00:31:58.309870885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 15 00:31:58.328362 containerd[1616]: time="2026-01-15T00:31:58.328318954Z" level=info msg="CreateContainer within sandbox \"d256f7077bac62b937b84971e2e961df178e725eebb25a83a70fbbcb3386e6c9\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 15 00:31:58.340837 containerd[1616]: time="2026-01-15T00:31:58.340800795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 15 00:31:58.341542 containerd[1616]: 
time="2026-01-15T00:31:58.341490803Z" level=info msg="Container 4bc62801a69b1dfca614726cb3c177d8f4c1a5d9ec8c23466460c6a5db070488: CDI devices from CRI Config.CDIDevices: []" Jan 15 00:31:58.350062 containerd[1616]: time="2026-01-15T00:31:58.349948462Z" level=info msg="CreateContainer within sandbox \"d256f7077bac62b937b84971e2e961df178e725eebb25a83a70fbbcb3386e6c9\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4bc62801a69b1dfca614726cb3c177d8f4c1a5d9ec8c23466460c6a5db070488\"" Jan 15 00:31:58.351164 containerd[1616]: time="2026-01-15T00:31:58.351136065Z" level=info msg="StartContainer for \"4bc62801a69b1dfca614726cb3c177d8f4c1a5d9ec8c23466460c6a5db070488\"" Jan 15 00:31:58.368707 containerd[1616]: time="2026-01-15T00:31:58.368297214Z" level=info msg="connecting to shim 4bc62801a69b1dfca614726cb3c177d8f4c1a5d9ec8c23466460c6a5db070488" address="unix:///run/containerd/s/e21a46ea9a8c7ac26eab92d1fc112d0588b3df13d0aa8b506a87af3a3bd8750b" protocol=ttrpc version=3 Jan 15 00:31:58.400330 systemd[1]: Started cri-containerd-4bc62801a69b1dfca614726cb3c177d8f4c1a5d9ec8c23466460c6a5db070488.scope - libcontainer container 4bc62801a69b1dfca614726cb3c177d8f4c1a5d9ec8c23466460c6a5db070488. Jan 15 00:31:58.417000 audit: BPF prog-id=164 op=LOAD Jan 15 00:31:58.419478 kernel: kauditd_printk_skb: 62 callbacks suppressed Jan 15 00:31:58.419576 kernel: audit: type=1334 audit(1768437118.417:564): prog-id=164 op=LOAD Jan 15 00:31:58.420000 audit: BPF prog-id=165 op=LOAD Jan 15 00:31:58.423135 kernel: audit: type=1334 audit(1768437118.420:565): prog-id=165 op=LOAD Jan 15 00:31:58.420000 audit[3443]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3342 pid=3443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:58.428085 kernel: audit: type=1300 audit(1768437118.420:565): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3342 pid=3443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:58.420000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462633632383031613639623164666361363134373236636233633137 Jan 15 00:31:58.433043 kernel: audit: type=1327 audit(1768437118.420:565): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462633632383031613639623164666361363134373236636233633137 Jan 15 00:31:58.420000 audit: BPF prog-id=165 op=UNLOAD Jan 15 00:31:58.420000 audit[3443]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3342 pid=3443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:58.437051 kernel: audit: type=1334 audit(1768437118.420:566): prog-id=165 op=UNLOAD Jan 15 00:31:58.437303 kernel: audit: type=1300 audit(1768437118.420:566): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3342 pid=3443 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:58.420000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462633632383031613639623164666361363134373236636233633137 Jan 15 00:31:58.441476 kernel: audit: type=1327 audit(1768437118.420:566): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462633632383031613639623164666361363134373236636233633137 Jan 15 00:31:58.421000 audit: BPF prog-id=166 op=LOAD Jan 15 00:31:58.421000 audit[3443]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3342 pid=3443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:58.447674 kernel: audit: type=1334 audit(1768437118.421:567): prog-id=166 op=LOAD Jan 15 00:31:58.447784 kernel: audit: type=1300 audit(1768437118.421:567): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3342 pid=3443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:58.421000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462633632383031613639623164666361363134373236636233633137 Jan 15 00:31:58.451806 kernel: audit: type=1327 audit(1768437118.421:567): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462633632383031613639623164666361363134373236636233633137 Jan 15 00:31:58.421000 audit: BPF prog-id=167 op=LOAD Jan 15 00:31:58.421000 audit[3443]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3342 pid=3443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:58.421000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462633632383031613639623164666361363134373236636233633137 Jan 15 00:31:58.421000 audit: BPF prog-id=167 op=UNLOAD Jan 15 00:31:58.421000 audit[3443]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3342 pid=3443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:58.421000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462633632383031613639623164666361363134373236636233633137 Jan 15 00:31:58.421000 audit: BPF prog-id=166 op=UNLOAD Jan 15 00:31:58.421000 audit[3443]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3342 pid=3443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:58.421000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462633632383031613639623164666361363134373236636233633137 Jan 15 00:31:58.421000 audit: BPF prog-id=168 op=LOAD Jan 15 00:31:58.421000 audit[3443]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3342 pid=3443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:58.421000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462633632383031613639623164666361363134373236636233633137 Jan 15 00:31:58.485560 containerd[1616]: time="2026-01-15T00:31:58.485514394Z" level=info msg="StartContainer for \"4bc62801a69b1dfca614726cb3c177d8f4c1a5d9ec8c23466460c6a5db070488\" returns successfully" Jan 15 00:31:59.004182 kubelet[2785]: E0115 00:31:59.003974 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rjlcz" podUID="14ced92d-cf89-41f0-99bf-edc9c92a737b" Jan 15 00:31:59.185058 kubelet[2785]: E0115 00:31:59.184744 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:31:59.222069 kubelet[2785]: I0115 00:31:59.221978 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-784d8b8c59-hvvck" podStartSLOduration=3.65304595 podStartE2EDuration="7.221950239s" podCreationTimestamp="2026-01-15 00:31:52 +0000 UTC" firstStartedPulling="2026-01-15 00:31:54.741922902 +0000 UTC m=+25.958186160" lastFinishedPulling="2026-01-15 00:31:58.31082719 +0000 UTC m=+29.527090449" observedRunningTime="2026-01-15 00:31:59.218053728 +0000 UTC m=+30.434316993" watchObservedRunningTime="2026-01-15 00:31:59.221950239 +0000 UTC m=+30.438213505" Jan 15 00:31:59.262000 audit[3482]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=3482 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:31:59.262000 audit[3482]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff7c4223d0 a2=0 a3=7fff7c4223bc items=0 ppid=2937 pid=3482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:59.262000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:31:59.266000 audit[3482]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=3482 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:31:59.266000 audit[3482]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fff7c4223d0 a2=0 a3=7fff7c4223bc items=0 ppid=2937 pid=3482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:31:59.266000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:32:00.206165 kubelet[2785]: E0115 00:32:00.205978 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:32:00.987879 kubelet[2785]: E0115 00:32:00.985951 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rjlcz" podUID="14ced92d-cf89-41f0-99bf-edc9c92a737b" Jan 15 00:32:01.191940 kubelet[2785]: E0115 00:32:01.191904 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:32:01.780433 containerd[1616]: time="2026-01-15T00:32:01.780323292Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:32:01.782761 containerd[1616]: time="2026-01-15T00:32:01.782677610Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 15 00:32:01.783870 containerd[1616]: time="2026-01-15T00:32:01.783796057Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:32:01.788386 containerd[1616]: time="2026-01-15T00:32:01.787434636Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:32:01.788386 containerd[1616]: time="2026-01-15T00:32:01.788073514Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.44706249s" Jan 15 00:32:01.788386 containerd[1616]: time="2026-01-15T00:32:01.788122399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 15 00:32:01.811812 containerd[1616]: time="2026-01-15T00:32:01.811740617Z" level=info msg="CreateContainer within sandbox 
\"4ff8348dd91ce219d46462ed5a2906a819e0521d575d304c03ee4c1d11f340a6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 15 00:32:01.837396 containerd[1616]: time="2026-01-15T00:32:01.837319844Z" level=info msg="Container 35b5241b148f760174d31268a1fcf59211df6a470298a33cec532ca6fa3f5787: CDI devices from CRI Config.CDIDevices: []" Jan 15 00:32:01.842635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3090906447.mount: Deactivated successfully. Jan 15 00:32:01.861390 containerd[1616]: time="2026-01-15T00:32:01.861098318Z" level=info msg="CreateContainer within sandbox \"4ff8348dd91ce219d46462ed5a2906a819e0521d575d304c03ee4c1d11f340a6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"35b5241b148f760174d31268a1fcf59211df6a470298a33cec532ca6fa3f5787\"" Jan 15 00:32:01.864064 containerd[1616]: time="2026-01-15T00:32:01.862100425Z" level=info msg="StartContainer for \"35b5241b148f760174d31268a1fcf59211df6a470298a33cec532ca6fa3f5787\"" Jan 15 00:32:01.864666 containerd[1616]: time="2026-01-15T00:32:01.864613413Z" level=info msg="connecting to shim 35b5241b148f760174d31268a1fcf59211df6a470298a33cec532ca6fa3f5787" address="unix:///run/containerd/s/b2aac25a5be3a1564bfe713a18d86619f160fb47856605f26ea1582225a381ec" protocol=ttrpc version=3 Jan 15 00:32:01.902400 systemd[1]: Started cri-containerd-35b5241b148f760174d31268a1fcf59211df6a470298a33cec532ca6fa3f5787.scope - libcontainer container 35b5241b148f760174d31268a1fcf59211df6a470298a33cec532ca6fa3f5787. Jan 15 00:32:01.970000 audit: BPF prog-id=169 op=LOAD Jan 15 00:32:01.970000 audit[3491]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3285 pid=3491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:01.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335623532343162313438663736303137346433313236386131666366 Jan 15 00:32:01.970000 audit: BPF prog-id=170 op=LOAD Jan 15 00:32:01.970000 audit[3491]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3285 pid=3491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:01.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335623532343162313438663736303137346433313236386131666366 Jan 15 00:32:01.971000 audit: BPF prog-id=170 op=UNLOAD Jan 15 00:32:01.971000 audit[3491]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3285 pid=3491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:01.971000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335623532343162313438663736303137346433313236386131666366 Jan 15 00:32:01.971000 audit: BPF prog-id=169 op=UNLOAD Jan 15 00:32:01.971000 audit[3491]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3285 pid=3491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:01.971000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335623532343162313438663736303137346433313236386131666366 Jan 15 00:32:01.971000 audit: BPF prog-id=171 op=LOAD Jan 15 00:32:01.971000 audit[3491]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3285 pid=3491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:01.971000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335623532343162313438663736303137346433313236386131666366 Jan 15 00:32:02.008821 containerd[1616]: time="2026-01-15T00:32:02.008743268Z" level=info msg="StartContainer for \"35b5241b148f760174d31268a1fcf59211df6a470298a33cec532ca6fa3f5787\" returns successfully" Jan 15 00:32:02.209069 kubelet[2785]: E0115 00:32:02.208881 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:32:02.758524 systemd[1]: cri-containerd-35b5241b148f760174d31268a1fcf59211df6a470298a33cec532ca6fa3f5787.scope: Deactivated successfully. Jan 15 00:32:02.758999 systemd[1]: cri-containerd-35b5241b148f760174d31268a1fcf59211df6a470298a33cec532ca6fa3f5787.scope: Consumed 735ms CPU time, 165M memory peak, 8.6M read from disk, 171.3M written to disk. Jan 15 00:32:02.762000 audit: BPF prog-id=171 op=UNLOAD Jan 15 00:32:02.853601 kubelet[2785]: I0115 00:32:02.852399 2785 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 15 00:32:02.864615 containerd[1616]: time="2026-01-15T00:32:02.864545359Z" level=info msg="received container exit event container_id:\"35b5241b148f760174d31268a1fcf59211df6a470298a33cec532ca6fa3f5787\" id:\"35b5241b148f760174d31268a1fcf59211df6a470298a33cec532ca6fa3f5787\" pid:3504 exited_at:{seconds:1768437122 nanos:855197654}" Jan 15 00:32:02.975250 systemd[1]: Created slice kubepods-burstable-pod5e2ebb1c_cdf8_4c57_934e_7ae859fc7427.slice - libcontainer container kubepods-burstable-pod5e2ebb1c_cdf8_4c57_934e_7ae859fc7427.slice. Jan 15 00:32:03.003028 systemd[1]: Created slice kubepods-burstable-podbd9fdd13_b944_49f0_8efe_4c6c4031a849.slice - libcontainer container kubepods-burstable-podbd9fdd13_b944_49f0_8efe_4c6c4031a849.slice. 
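The container exit events above record the exit time as a raw Unix epoch (exited_at:{seconds:1768437122 nanos:855197654}) rather than a journal-style timestamp; converting it confirms it is the same instant as the surrounding 00:32:02 entries. Sketch (Python, illustrative):

    from datetime import datetime, timezone

    exited_at = 1768437122.855197654
    print(datetime.fromtimestamp(exited_at, tz=timezone.utc).isoformat())
    # 2026-01-15T00:32:02.855198+00:00, matching the journal timestamps around this entry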
Jan 15 00:32:03.035765 systemd[1]: Created slice kubepods-besteffort-pod5adbdfdd_96a2_41eb_8663_7460bd3865b9.slice - libcontainer container kubepods-besteffort-pod5adbdfdd_96a2_41eb_8663_7460bd3865b9.slice. Jan 15 00:32:03.073672 systemd[1]: Created slice kubepods-besteffort-pod3b2df0f5_3af7_40bf_8f6e_f5e8397900ad.slice - libcontainer container kubepods-besteffort-pod3b2df0f5_3af7_40bf_8f6e_f5e8397900ad.slice. Jan 15 00:32:03.098142 systemd[1]: Created slice kubepods-besteffort-podb432d05d_ed71_4758_b9af_7738bf34afb7.slice - libcontainer container kubepods-besteffort-podb432d05d_ed71_4758_b9af_7738bf34afb7.slice. Jan 15 00:32:03.099523 kubelet[2785]: I0115 00:32:03.099487 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd9fdd13-b944-49f0-8efe-4c6c4031a849-config-volume\") pod \"coredns-668d6bf9bc-cxd6l\" (UID: \"bd9fdd13-b944-49f0-8efe-4c6c4031a849\") " pod="kube-system/coredns-668d6bf9bc-cxd6l" Jan 15 00:32:03.100189 kubelet[2785]: I0115 00:32:03.099601 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl2m7\" (UniqueName: \"kubernetes.io/projected/b432d05d-ed71-4758-b9af-7738bf34afb7-kube-api-access-tl2m7\") pod \"calico-apiserver-586c796f68-gf7fx\" (UID: \"b432d05d-ed71-4758-b9af-7738bf34afb7\") " pod="calico-apiserver/calico-apiserver-586c796f68-gf7fx" Jan 15 00:32:03.100189 kubelet[2785]: I0115 00:32:03.099646 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d7lk\" (UniqueName: \"kubernetes.io/projected/3b2df0f5-3af7-40bf-8f6e-f5e8397900ad-kube-api-access-8d7lk\") pod \"calico-apiserver-586c796f68-7pr9q\" (UID: \"3b2df0f5-3af7-40bf-8f6e-f5e8397900ad\") " pod="calico-apiserver/calico-apiserver-586c796f68-7pr9q" Jan 15 00:32:03.100189 kubelet[2785]: I0115 00:32:03.099680 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3b2df0f5-3af7-40bf-8f6e-f5e8397900ad-calico-apiserver-certs\") pod \"calico-apiserver-586c796f68-7pr9q\" (UID: \"3b2df0f5-3af7-40bf-8f6e-f5e8397900ad\") " pod="calico-apiserver/calico-apiserver-586c796f68-7pr9q" Jan 15 00:32:03.100189 kubelet[2785]: I0115 00:32:03.099710 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e2ebb1c-cdf8-4c57-934e-7ae859fc7427-config-volume\") pod \"coredns-668d6bf9bc-n8h7v\" (UID: \"5e2ebb1c-cdf8-4c57-934e-7ae859fc7427\") " pod="kube-system/coredns-668d6bf9bc-n8h7v" Jan 15 00:32:03.100189 kubelet[2785]: I0115 00:32:03.099744 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlhkh\" (UniqueName: \"kubernetes.io/projected/5e2ebb1c-cdf8-4c57-934e-7ae859fc7427-kube-api-access-zlhkh\") pod \"coredns-668d6bf9bc-n8h7v\" (UID: \"5e2ebb1c-cdf8-4c57-934e-7ae859fc7427\") " pod="kube-system/coredns-668d6bf9bc-n8h7v" Jan 15 00:32:03.100379 kubelet[2785]: I0115 00:32:03.099769 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5adbdfdd-96a2-41eb-8663-7460bd3865b9-tigera-ca-bundle\") pod \"calico-kube-controllers-7d4f97847b-lrvs5\" (UID: \"5adbdfdd-96a2-41eb-8663-7460bd3865b9\") " 
pod="calico-system/calico-kube-controllers-7d4f97847b-lrvs5" Jan 15 00:32:03.100379 kubelet[2785]: I0115 00:32:03.099785 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts2lb\" (UniqueName: \"kubernetes.io/projected/5adbdfdd-96a2-41eb-8663-7460bd3865b9-kube-api-access-ts2lb\") pod \"calico-kube-controllers-7d4f97847b-lrvs5\" (UID: \"5adbdfdd-96a2-41eb-8663-7460bd3865b9\") " pod="calico-system/calico-kube-controllers-7d4f97847b-lrvs5" Jan 15 00:32:03.100379 kubelet[2785]: I0115 00:32:03.099805 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw6jt\" (UniqueName: \"kubernetes.io/projected/bd9fdd13-b944-49f0-8efe-4c6c4031a849-kube-api-access-zw6jt\") pod \"coredns-668d6bf9bc-cxd6l\" (UID: \"bd9fdd13-b944-49f0-8efe-4c6c4031a849\") " pod="kube-system/coredns-668d6bf9bc-cxd6l" Jan 15 00:32:03.100379 kubelet[2785]: I0115 00:32:03.099831 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b432d05d-ed71-4758-b9af-7738bf34afb7-calico-apiserver-certs\") pod \"calico-apiserver-586c796f68-gf7fx\" (UID: \"b432d05d-ed71-4758-b9af-7738bf34afb7\") " pod="calico-apiserver/calico-apiserver-586c796f68-gf7fx" Jan 15 00:32:03.126234 systemd[1]: Created slice kubepods-besteffort-pod14ced92d_cf89_41f0_99bf_edc9c92a737b.slice - libcontainer container kubepods-besteffort-pod14ced92d_cf89_41f0_99bf_edc9c92a737b.slice. Jan 15 00:32:03.138308 containerd[1616]: time="2026-01-15T00:32:03.137270169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rjlcz,Uid:14ced92d-cf89-41f0-99bf-edc9c92a737b,Namespace:calico-system,Attempt:0,}" Jan 15 00:32:03.138819 systemd[1]: Created slice kubepods-besteffort-pod388c3954_9ce8_4280_a01f_92fddc826177.slice - libcontainer container kubepods-besteffort-pod388c3954_9ce8_4280_a01f_92fddc826177.slice. Jan 15 00:32:03.156318 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35b5241b148f760174d31268a1fcf59211df6a470298a33cec532ca6fa3f5787-rootfs.mount: Deactivated successfully. Jan 15 00:32:03.186638 systemd[1]: Created slice kubepods-besteffort-poda6d2aaa6_9d35_4f8a_99b8_b75c10539cd4.slice - libcontainer container kubepods-besteffort-poda6d2aaa6_9d35_4f8a_99b8_b75c10539cd4.slice. 
Jan 15 00:32:03.201836 kubelet[2785]: I0115 00:32:03.201004 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpqkw\" (UniqueName: \"kubernetes.io/projected/a6d2aaa6-9d35-4f8a-99b8-b75c10539cd4-kube-api-access-mpqkw\") pod \"goldmane-666569f655-fmmn9\" (UID: \"a6d2aaa6-9d35-4f8a-99b8-b75c10539cd4\") " pod="calico-system/goldmane-666569f655-fmmn9" Jan 15 00:32:03.202241 kubelet[2785]: I0115 00:32:03.202202 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txsqm\" (UniqueName: \"kubernetes.io/projected/388c3954-9ce8-4280-a01f-92fddc826177-kube-api-access-txsqm\") pod \"whisker-855d8fc489-mbtbf\" (UID: \"388c3954-9ce8-4280-a01f-92fddc826177\") " pod="calico-system/whisker-855d8fc489-mbtbf" Jan 15 00:32:03.202410 kubelet[2785]: I0115 00:32:03.202396 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6d2aaa6-9d35-4f8a-99b8-b75c10539cd4-goldmane-ca-bundle\") pod \"goldmane-666569f655-fmmn9\" (UID: \"a6d2aaa6-9d35-4f8a-99b8-b75c10539cd4\") " pod="calico-system/goldmane-666569f655-fmmn9" Jan 15 00:32:03.202495 kubelet[2785]: I0115 00:32:03.202484 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/388c3954-9ce8-4280-a01f-92fddc826177-whisker-ca-bundle\") pod \"whisker-855d8fc489-mbtbf\" (UID: \"388c3954-9ce8-4280-a01f-92fddc826177\") " pod="calico-system/whisker-855d8fc489-mbtbf" Jan 15 00:32:03.207281 kubelet[2785]: I0115 00:32:03.207231 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a6d2aaa6-9d35-4f8a-99b8-b75c10539cd4-goldmane-key-pair\") pod \"goldmane-666569f655-fmmn9\" (UID: \"a6d2aaa6-9d35-4f8a-99b8-b75c10539cd4\") " pod="calico-system/goldmane-666569f655-fmmn9" Jan 15 00:32:03.210181 kubelet[2785]: I0115 00:32:03.207590 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/388c3954-9ce8-4280-a01f-92fddc826177-whisker-backend-key-pair\") pod \"whisker-855d8fc489-mbtbf\" (UID: \"388c3954-9ce8-4280-a01f-92fddc826177\") " pod="calico-system/whisker-855d8fc489-mbtbf" Jan 15 00:32:03.210181 kubelet[2785]: I0115 00:32:03.209197 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6d2aaa6-9d35-4f8a-99b8-b75c10539cd4-config\") pod \"goldmane-666569f655-fmmn9\" (UID: \"a6d2aaa6-9d35-4f8a-99b8-b75c10539cd4\") " pod="calico-system/goldmane-666569f655-fmmn9" Jan 15 00:32:03.313557 kubelet[2785]: E0115 00:32:03.313307 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:32:03.326507 kubelet[2785]: E0115 00:32:03.326461 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:32:03.327788 containerd[1616]: time="2026-01-15T00:32:03.327735658Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-cxd6l,Uid:bd9fdd13-b944-49f0-8efe-4c6c4031a849,Namespace:kube-system,Attempt:0,}" Jan 15 00:32:03.329408 containerd[1616]: time="2026-01-15T00:32:03.329284118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 15 00:32:03.364600 containerd[1616]: time="2026-01-15T00:32:03.364541218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d4f97847b-lrvs5,Uid:5adbdfdd-96a2-41eb-8663-7460bd3865b9,Namespace:calico-system,Attempt:0,}" Jan 15 00:32:03.383130 containerd[1616]: time="2026-01-15T00:32:03.382703202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-586c796f68-7pr9q,Uid:3b2df0f5-3af7-40bf-8f6e-f5e8397900ad,Namespace:calico-apiserver,Attempt:0,}" Jan 15 00:32:03.416304 containerd[1616]: time="2026-01-15T00:32:03.416241055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-586c796f68-gf7fx,Uid:b432d05d-ed71-4758-b9af-7738bf34afb7,Namespace:calico-apiserver,Attempt:0,}" Jan 15 00:32:03.474486 containerd[1616]: time="2026-01-15T00:32:03.474416303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-855d8fc489-mbtbf,Uid:388c3954-9ce8-4280-a01f-92fddc826177,Namespace:calico-system,Attempt:0,}" Jan 15 00:32:03.504953 containerd[1616]: time="2026-01-15T00:32:03.504425445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-fmmn9,Uid:a6d2aaa6-9d35-4f8a-99b8-b75c10539cd4,Namespace:calico-system,Attempt:0,}" Jan 15 00:32:03.594167 kubelet[2785]: E0115 00:32:03.593591 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:32:03.595438 containerd[1616]: time="2026-01-15T00:32:03.594988878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n8h7v,Uid:5e2ebb1c-cdf8-4c57-934e-7ae859fc7427,Namespace:kube-system,Attempt:0,}" Jan 15 00:32:03.871463 containerd[1616]: time="2026-01-15T00:32:03.871257349Z" level=error msg="Failed to destroy network for sandbox \"cb065e41b21cfbff9a488c35eb6c9327624c2ee8009d093a0b786b2870e95820\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:32:03.875561 containerd[1616]: time="2026-01-15T00:32:03.875486196Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-855d8fc489-mbtbf,Uid:388c3954-9ce8-4280-a01f-92fddc826177,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb065e41b21cfbff9a488c35eb6c9327624c2ee8009d093a0b786b2870e95820\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:32:03.886436 kubelet[2785]: E0115 00:32:03.885890 2785 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb065e41b21cfbff9a488c35eb6c9327624c2ee8009d093a0b786b2870e95820\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:32:03.892159 kubelet[2785]: E0115 00:32:03.891986 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"cb065e41b21cfbff9a488c35eb6c9327624c2ee8009d093a0b786b2870e95820\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-855d8fc489-mbtbf" Jan 15 00:32:03.892159 kubelet[2785]: E0115 00:32:03.892068 2785 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb065e41b21cfbff9a488c35eb6c9327624c2ee8009d093a0b786b2870e95820\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-855d8fc489-mbtbf" Jan 15 00:32:03.893031 containerd[1616]: time="2026-01-15T00:32:03.892966770Z" level=error msg="Failed to destroy network for sandbox \"192328f90f1877d4214c3412c00922439c647c7f460a807070e2f9da5c753740\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:32:03.900568 kubelet[2785]: E0115 00:32:03.900372 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-855d8fc489-mbtbf_calico-system(388c3954-9ce8-4280-a01f-92fddc826177)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-855d8fc489-mbtbf_calico-system(388c3954-9ce8-4280-a01f-92fddc826177)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cb065e41b21cfbff9a488c35eb6c9327624c2ee8009d093a0b786b2870e95820\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-855d8fc489-mbtbf" podUID="388c3954-9ce8-4280-a01f-92fddc826177" Jan 15 00:32:03.908828 containerd[1616]: time="2026-01-15T00:32:03.908627187Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cxd6l,Uid:bd9fdd13-b944-49f0-8efe-4c6c4031a849,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"192328f90f1877d4214c3412c00922439c647c7f460a807070e2f9da5c753740\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:32:03.909253 kubelet[2785]: E0115 00:32:03.908970 2785 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"192328f90f1877d4214c3412c00922439c647c7f460a807070e2f9da5c753740\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:32:03.909253 kubelet[2785]: E0115 00:32:03.909057 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"192328f90f1877d4214c3412c00922439c647c7f460a807070e2f9da5c753740\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cxd6l" 
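Every RunPodSandbox failure above reduces to the same condition: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes once it is running, and at this point in the log only the calico-node pod's init containers (flexvol-driver, install-cni) have completed, so the file does not exist yet. A node-side check of that precondition, illustrative only (Python):

    from pathlib import Path

    nodename = Path("/var/lib/calico/nodename")  # the path the CNI plugin stats in the errors above
    if nodename.exists():
        print("calico/node has registered this node as:", nodename.read_text().strip())
    else:
        print("missing: the calico/node container is not running yet (or /var/lib/calico/ is not mounted)")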
Jan 15 00:32:03.909253 kubelet[2785]: E0115 00:32:03.909082 2785 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"192328f90f1877d4214c3412c00922439c647c7f460a807070e2f9da5c753740\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cxd6l" Jan 15 00:32:03.909850 kubelet[2785]: E0115 00:32:03.909136 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-cxd6l_kube-system(bd9fdd13-b944-49f0-8efe-4c6c4031a849)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-cxd6l_kube-system(bd9fdd13-b944-49f0-8efe-4c6c4031a849)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"192328f90f1877d4214c3412c00922439c647c7f460a807070e2f9da5c753740\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-cxd6l" podUID="bd9fdd13-b944-49f0-8efe-4c6c4031a849" Jan 15 00:32:03.915149 containerd[1616]: time="2026-01-15T00:32:03.914948314Z" level=error msg="Failed to destroy network for sandbox \"aa123ca41303ad7574c09f167247ab2cb59e50403cfc77e01f1f0f96de2b9b47\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:32:03.919803 containerd[1616]: time="2026-01-15T00:32:03.919641208Z" level=error msg="Failed to destroy network for sandbox \"0f79cfbadf64f50bddd7d56ec8971a6a72846e3ae8a4aad3d08d3751545cf818\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:32:03.923430 containerd[1616]: time="2026-01-15T00:32:03.923283995Z" level=error msg="Failed to destroy network for sandbox \"d7bde7ad939798dbeb70077c84a88b8cd771cb067dea065a35e9b11f06aa1af4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:32:03.923727 containerd[1616]: time="2026-01-15T00:32:03.923679614Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d4f97847b-lrvs5,Uid:5adbdfdd-96a2-41eb-8663-7460bd3865b9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f79cfbadf64f50bddd7d56ec8971a6a72846e3ae8a4aad3d08d3751545cf818\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:32:03.924256 kubelet[2785]: E0115 00:32:03.924006 2785 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f79cfbadf64f50bddd7d56ec8971a6a72846e3ae8a4aad3d08d3751545cf818\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:32:03.924585 kubelet[2785]: E0115 
00:32:03.924338 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f79cfbadf64f50bddd7d56ec8971a6a72846e3ae8a4aad3d08d3751545cf818\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d4f97847b-lrvs5" Jan 15 00:32:03.924585 kubelet[2785]: E0115 00:32:03.924465 2785 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f79cfbadf64f50bddd7d56ec8971a6a72846e3ae8a4aad3d08d3751545cf818\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d4f97847b-lrvs5" Jan 15 00:32:03.925541 kubelet[2785]: E0115 00:32:03.925228 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7d4f97847b-lrvs5_calico-system(5adbdfdd-96a2-41eb-8663-7460bd3865b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7d4f97847b-lrvs5_calico-system(5adbdfdd-96a2-41eb-8663-7460bd3865b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0f79cfbadf64f50bddd7d56ec8971a6a72846e3ae8a4aad3d08d3751545cf818\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d4f97847b-lrvs5" podUID="5adbdfdd-96a2-41eb-8663-7460bd3865b9" Jan 15 00:32:03.926562 containerd[1616]: time="2026-01-15T00:32:03.926429374Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rjlcz,Uid:14ced92d-cf89-41f0-99bf-edc9c92a737b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa123ca41303ad7574c09f167247ab2cb59e50403cfc77e01f1f0f96de2b9b47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:32:03.926701 kubelet[2785]: E0115 00:32:03.926650 2785 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa123ca41303ad7574c09f167247ab2cb59e50403cfc77e01f1f0f96de2b9b47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:32:03.926767 kubelet[2785]: E0115 00:32:03.926699 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa123ca41303ad7574c09f167247ab2cb59e50403cfc77e01f1f0f96de2b9b47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rjlcz" Jan 15 00:32:03.926767 kubelet[2785]: E0115 00:32:03.926721 2785 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"aa123ca41303ad7574c09f167247ab2cb59e50403cfc77e01f1f0f96de2b9b47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rjlcz" Jan 15 00:32:03.926875 kubelet[2785]: E0115 00:32:03.926759 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rjlcz_calico-system(14ced92d-cf89-41f0-99bf-edc9c92a737b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rjlcz_calico-system(14ced92d-cf89-41f0-99bf-edc9c92a737b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa123ca41303ad7574c09f167247ab2cb59e50403cfc77e01f1f0f96de2b9b47\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rjlcz" podUID="14ced92d-cf89-41f0-99bf-edc9c92a737b" Jan 15 00:32:03.929551 containerd[1616]: time="2026-01-15T00:32:03.929092709Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-586c796f68-7pr9q,Uid:3b2df0f5-3af7-40bf-8f6e-f5e8397900ad,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7bde7ad939798dbeb70077c84a88b8cd771cb067dea065a35e9b11f06aa1af4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:32:03.929729 kubelet[2785]: E0115 00:32:03.929398 2785 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7bde7ad939798dbeb70077c84a88b8cd771cb067dea065a35e9b11f06aa1af4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:32:03.930153 kubelet[2785]: E0115 00:32:03.929816 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7bde7ad939798dbeb70077c84a88b8cd771cb067dea065a35e9b11f06aa1af4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-586c796f68-7pr9q" Jan 15 00:32:03.930153 kubelet[2785]: E0115 00:32:03.929869 2785 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7bde7ad939798dbeb70077c84a88b8cd771cb067dea065a35e9b11f06aa1af4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-586c796f68-7pr9q" Jan 15 00:32:03.930153 kubelet[2785]: E0115 00:32:03.930101 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-586c796f68-7pr9q_calico-apiserver(3b2df0f5-3af7-40bf-8f6e-f5e8397900ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-586c796f68-7pr9q_calico-apiserver(3b2df0f5-3af7-40bf-8f6e-f5e8397900ad)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"d7bde7ad939798dbeb70077c84a88b8cd771cb067dea065a35e9b11f06aa1af4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-586c796f68-7pr9q" podUID="3b2df0f5-3af7-40bf-8f6e-f5e8397900ad" Jan 15 00:32:03.949916 containerd[1616]: time="2026-01-15T00:32:03.949784663Z" level=error msg="Failed to destroy network for sandbox \"3d70c0b5ae3faeda28a325aac2859f4dab886501150839dbfddcf2fc157e185e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:32:03.953118 containerd[1616]: time="2026-01-15T00:32:03.952789239Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-fmmn9,Uid:a6d2aaa6-9d35-4f8a-99b8-b75c10539cd4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d70c0b5ae3faeda28a325aac2859f4dab886501150839dbfddcf2fc157e185e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:32:03.953494 kubelet[2785]: E0115 00:32:03.953134 2785 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d70c0b5ae3faeda28a325aac2859f4dab886501150839dbfddcf2fc157e185e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:32:03.953494 kubelet[2785]: E0115 00:32:03.953222 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d70c0b5ae3faeda28a325aac2859f4dab886501150839dbfddcf2fc157e185e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-fmmn9" Jan 15 00:32:03.953494 kubelet[2785]: E0115 00:32:03.953244 2785 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d70c0b5ae3faeda28a325aac2859f4dab886501150839dbfddcf2fc157e185e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-fmmn9" Jan 15 00:32:03.953864 kubelet[2785]: E0115 00:32:03.953319 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-fmmn9_calico-system(a6d2aaa6-9d35-4f8a-99b8-b75c10539cd4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-fmmn9_calico-system(a6d2aaa6-9d35-4f8a-99b8-b75c10539cd4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d70c0b5ae3faeda28a325aac2859f4dab886501150839dbfddcf2fc157e185e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/goldmane-666569f655-fmmn9" podUID="a6d2aaa6-9d35-4f8a-99b8-b75c10539cd4" Jan 15 00:32:03.964050 containerd[1616]: time="2026-01-15T00:32:03.963920198Z" level=error msg="Failed to destroy network for sandbox \"86c4be728eb1ae55a39cb9c1fdfc60e4300ce6c1ce8aa301ebc1d528494f55de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:32:03.966409 containerd[1616]: time="2026-01-15T00:32:03.966329978Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-586c796f68-gf7fx,Uid:b432d05d-ed71-4758-b9af-7738bf34afb7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"86c4be728eb1ae55a39cb9c1fdfc60e4300ce6c1ce8aa301ebc1d528494f55de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:32:03.966998 kubelet[2785]: E0115 00:32:03.966960 2785 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86c4be728eb1ae55a39cb9c1fdfc60e4300ce6c1ce8aa301ebc1d528494f55de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:32:03.967551 kubelet[2785]: E0115 00:32:03.967236 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86c4be728eb1ae55a39cb9c1fdfc60e4300ce6c1ce8aa301ebc1d528494f55de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-586c796f68-gf7fx" Jan 15 00:32:03.967551 kubelet[2785]: E0115 00:32:03.967435 2785 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86c4be728eb1ae55a39cb9c1fdfc60e4300ce6c1ce8aa301ebc1d528494f55de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-586c796f68-gf7fx" Jan 15 00:32:03.969244 kubelet[2785]: E0115 00:32:03.969006 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-586c796f68-gf7fx_calico-apiserver(b432d05d-ed71-4758-b9af-7738bf34afb7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-586c796f68-gf7fx_calico-apiserver(b432d05d-ed71-4758-b9af-7738bf34afb7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86c4be728eb1ae55a39cb9c1fdfc60e4300ce6c1ce8aa301ebc1d528494f55de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-586c796f68-gf7fx" podUID="b432d05d-ed71-4758-b9af-7738bf34afb7" Jan 15 00:32:03.978432 containerd[1616]: time="2026-01-15T00:32:03.978339381Z" level=error msg="Failed to destroy network for sandbox 
\"6b8820078eec303e692dc9a86de6db794ac50f6eaddfb02973c3f28785509863\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:32:03.980778 containerd[1616]: time="2026-01-15T00:32:03.980706911Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n8h7v,Uid:5e2ebb1c-cdf8-4c57-934e-7ae859fc7427,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b8820078eec303e692dc9a86de6db794ac50f6eaddfb02973c3f28785509863\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:32:03.981741 kubelet[2785]: E0115 00:32:03.981685 2785 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b8820078eec303e692dc9a86de6db794ac50f6eaddfb02973c3f28785509863\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:32:03.981946 kubelet[2785]: E0115 00:32:03.981772 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b8820078eec303e692dc9a86de6db794ac50f6eaddfb02973c3f28785509863\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-n8h7v" Jan 15 00:32:03.981946 kubelet[2785]: E0115 00:32:03.981807 2785 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b8820078eec303e692dc9a86de6db794ac50f6eaddfb02973c3f28785509863\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-n8h7v" Jan 15 00:32:03.981946 kubelet[2785]: E0115 00:32:03.981894 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-n8h7v_kube-system(5e2ebb1c-cdf8-4c57-934e-7ae859fc7427)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-n8h7v_kube-system(5e2ebb1c-cdf8-4c57-934e-7ae859fc7427)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b8820078eec303e692dc9a86de6db794ac50f6eaddfb02973c3f28785509863\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-n8h7v" podUID="5e2ebb1c-cdf8-4c57-934e-7ae859fc7427" Jan 15 00:32:04.159798 systemd[1]: run-netns-cni\x2df57cef37\x2d292e\x2d10a5\x2d26d2\x2dbc77bbb54a3e.mount: Deactivated successfully. Jan 15 00:32:11.758839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount978696353.mount: Deactivated successfully. 
Jan 15 00:32:11.830920 containerd[1616]: time="2026-01-15T00:32:11.807785565Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:32:11.832950 containerd[1616]: time="2026-01-15T00:32:11.831861519Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 15 00:32:11.850719 containerd[1616]: time="2026-01-15T00:32:11.850654687Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:32:11.853840 containerd[1616]: time="2026-01-15T00:32:11.853632963Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:32:11.854492 containerd[1616]: time="2026-01-15T00:32:11.854106886Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.524757015s" Jan 15 00:32:11.854492 containerd[1616]: time="2026-01-15T00:32:11.854139728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 15 00:32:11.880889 containerd[1616]: time="2026-01-15T00:32:11.880585221Z" level=info msg="CreateContainer within sandbox \"4ff8348dd91ce219d46462ed5a2906a819e0521d575d304c03ee4c1d11f340a6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 15 00:32:11.931559 containerd[1616]: time="2026-01-15T00:32:11.931314448Z" level=info msg="Container f750eb421e77d5e8ceb028f4f3015235f333163366321dc3f6b48e7b88e65d7f: CDI devices from CRI Config.CDIDevices: []" Jan 15 00:32:11.935738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1699579668.mount: Deactivated successfully. Jan 15 00:32:11.980362 containerd[1616]: time="2026-01-15T00:32:11.980298708Z" level=info msg="CreateContainer within sandbox \"4ff8348dd91ce219d46462ed5a2906a819e0521d575d304c03ee4c1d11f340a6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f750eb421e77d5e8ceb028f4f3015235f333163366321dc3f6b48e7b88e65d7f\"" Jan 15 00:32:11.981209 containerd[1616]: time="2026-01-15T00:32:11.981122341Z" level=info msg="StartContainer for \"f750eb421e77d5e8ceb028f4f3015235f333163366321dc3f6b48e7b88e65d7f\"" Jan 15 00:32:11.988052 containerd[1616]: time="2026-01-15T00:32:11.986921672Z" level=info msg="connecting to shim f750eb421e77d5e8ceb028f4f3015235f333163366321dc3f6b48e7b88e65d7f" address="unix:///run/containerd/s/b2aac25a5be3a1564bfe713a18d86619f160fb47856605f26ea1582225a381ec" protocol=ttrpc version=3 Jan 15 00:32:12.129648 systemd[1]: Started cri-containerd-f750eb421e77d5e8ceb028f4f3015235f333163366321dc3f6b48e7b88e65d7f.scope - libcontainer container f750eb421e77d5e8ceb028f4f3015235f333163366321dc3f6b48e7b88e65d7f. 
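At this point the ghcr.io/flatcar/calico/node:v3.30.4 pull completes (~157 MB in about 8.5 s), containerd creates the calico-node container inside the existing sandbox, connects to its v2 shim over ttrpc, and systemd tracks the shim's processes in a cri-containerd-….scope unit. In the cluster this whole sequence is driven by the kubelet over the CRI; the sketch below only illustrates the equivalent pull/create/start calls through the containerd 1.x Go client, with the image reference and the k8s.io namespace taken from the log and everything else (container ID, spec) simplified:

// Illustrative use of the containerd Go client: pull an image, create a
// container from it, and start a task, roughly mirroring the events above.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed containers live in the "k8s.io" namespace, as seen in the log.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.30.4", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	container, err := client.NewContainer(ctx, "calico-node-demo",
		containerd.WithNewSnapshot("calico-node-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)))
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}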
Jan 15 00:32:12.227000 audit: BPF prog-id=172 op=LOAD Jan 15 00:32:12.229626 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 15 00:32:12.230158 kernel: audit: type=1334 audit(1768437132.227:580): prog-id=172 op=LOAD Jan 15 00:32:12.227000 audit[3759]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000112488 a2=98 a3=0 items=0 ppid=3285 pid=3759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:12.233120 kernel: audit: type=1300 audit(1768437132.227:580): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000112488 a2=98 a3=0 items=0 ppid=3285 pid=3759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:12.227000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637353065623432316537376435653863656230323866346633303135 Jan 15 00:32:12.241107 kernel: audit: type=1327 audit(1768437132.227:580): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637353065623432316537376435653863656230323866346633303135 Jan 15 00:32:12.241773 kernel: audit: type=1334 audit(1768437132.228:581): prog-id=173 op=LOAD Jan 15 00:32:12.228000 audit: BPF prog-id=173 op=LOAD Jan 15 00:32:12.228000 audit[3759]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000112218 a2=98 a3=0 items=0 ppid=3285 pid=3759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:12.244893 kernel: audit: type=1300 audit(1768437132.228:581): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000112218 a2=98 a3=0 items=0 ppid=3285 pid=3759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:12.228000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637353065623432316537376435653863656230323866346633303135 Jan 15 00:32:12.251066 kernel: audit: type=1327 audit(1768437132.228:581): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637353065623432316537376435653863656230323866346633303135 Jan 15 00:32:12.228000 audit: BPF prog-id=173 op=UNLOAD Jan 15 00:32:12.258151 kernel: audit: type=1334 audit(1768437132.228:582): prog-id=173 op=UNLOAD Jan 15 00:32:12.228000 audit[3759]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3285 pid=3759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:12.265076 kernel: audit: type=1300 
audit(1768437132.228:582): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3285 pid=3759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:12.265899 kernel: audit: type=1327 audit(1768437132.228:582): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637353065623432316537376435653863656230323866346633303135 Jan 15 00:32:12.228000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637353065623432316537376435653863656230323866346633303135 Jan 15 00:32:12.228000 audit: BPF prog-id=172 op=UNLOAD Jan 15 00:32:12.275047 kernel: audit: type=1334 audit(1768437132.228:583): prog-id=172 op=UNLOAD Jan 15 00:32:12.228000 audit[3759]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3285 pid=3759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:12.228000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637353065623432316537376435653863656230323866346633303135 Jan 15 00:32:12.228000 audit: BPF prog-id=174 op=LOAD Jan 15 00:32:12.228000 audit[3759]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001126e8 a2=98 a3=0 items=0 ppid=3285 pid=3759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:12.228000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637353065623432316537376435653863656230323866346633303135 Jan 15 00:32:12.319246 containerd[1616]: time="2026-01-15T00:32:12.319178821Z" level=info msg="StartContainer for \"f750eb421e77d5e8ceb028f4f3015235f333163366321dc3f6b48e7b88e65d7f\" returns successfully" Jan 15 00:32:12.426942 kubelet[2785]: E0115 00:32:12.426532 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:32:12.527740 kubelet[2785]: I0115 00:32:12.523636 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mq72d" podStartSLOduration=2.723362598 podStartE2EDuration="20.523606062s" podCreationTimestamp="2026-01-15 00:31:52 +0000 UTC" firstStartedPulling="2026-01-15 00:31:54.055586309 +0000 UTC m=+25.271849555" lastFinishedPulling="2026-01-15 00:32:11.855829747 +0000 UTC m=+43.072093019" observedRunningTime="2026-01-15 00:32:12.4994549 +0000 UTC m=+43.715718166" watchObservedRunningTime="2026-01-15 00:32:12.523606062 +0000 UTC m=+43.739869335" Jan 15 00:32:12.551895 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Jan 15 00:32:12.552048 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 15 00:32:12.987628 kubelet[2785]: I0115 00:32:12.987185 2785 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txsqm\" (UniqueName: \"kubernetes.io/projected/388c3954-9ce8-4280-a01f-92fddc826177-kube-api-access-txsqm\") pod \"388c3954-9ce8-4280-a01f-92fddc826177\" (UID: \"388c3954-9ce8-4280-a01f-92fddc826177\") " Jan 15 00:32:12.987628 kubelet[2785]: I0115 00:32:12.987261 2785 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/388c3954-9ce8-4280-a01f-92fddc826177-whisker-backend-key-pair\") pod \"388c3954-9ce8-4280-a01f-92fddc826177\" (UID: \"388c3954-9ce8-4280-a01f-92fddc826177\") " Jan 15 00:32:12.987628 kubelet[2785]: I0115 00:32:12.987284 2785 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/388c3954-9ce8-4280-a01f-92fddc826177-whisker-ca-bundle\") pod \"388c3954-9ce8-4280-a01f-92fddc826177\" (UID: \"388c3954-9ce8-4280-a01f-92fddc826177\") " Jan 15 00:32:12.990583 kubelet[2785]: I0115 00:32:12.990534 2785 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/388c3954-9ce8-4280-a01f-92fddc826177-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "388c3954-9ce8-4280-a01f-92fddc826177" (UID: "388c3954-9ce8-4280-a01f-92fddc826177"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 15 00:32:13.016208 systemd[1]: var-lib-kubelet-pods-388c3954\x2d9ce8\x2d4280\x2da01f\x2d92fddc826177-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 15 00:32:13.017339 kubelet[2785]: I0115 00:32:13.016997 2785 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/388c3954-9ce8-4280-a01f-92fddc826177-kube-api-access-txsqm" (OuterVolumeSpecName: "kube-api-access-txsqm") pod "388c3954-9ce8-4280-a01f-92fddc826177" (UID: "388c3954-9ce8-4280-a01f-92fddc826177"). InnerVolumeSpecName "kube-api-access-txsqm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 15 00:32:13.017339 kubelet[2785]: I0115 00:32:13.017246 2785 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/388c3954-9ce8-4280-a01f-92fddc826177-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "388c3954-9ce8-4280-a01f-92fddc826177" (UID: "388c3954-9ce8-4280-a01f-92fddc826177"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 15 00:32:13.026641 systemd[1]: var-lib-kubelet-pods-388c3954\x2d9ce8\x2d4280\x2da01f\x2d92fddc826177-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtxsqm.mount: Deactivated successfully. 
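The audit: BPF prog-id=… records a few entries back (and the larger bursts later in this log) come from runc loading and unloading BPF programs while starting the container; the accompanying PROCTITLE field is the process command line, hex-encoded with NUL bytes between arguments. Decoding the runc value above yields "runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/f750eb…" (truncated exactly as in the record). A small Go sketch of that decoding, using only a prefix of the logged value:

// Decode an audit PROCTITLE value (hex-encoded argv, NUL-separated) into a
// readable command line. The sample is a prefix of the runc proctitle above.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	const proctitle = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	// argv elements are separated by NUL bytes in the audit record.
	fmt.Println(strings.Join(strings.Split(string(raw), "\x00"), " "))
	// Output: runc --root /run/containerd/runc/k8s.io --log
}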
Jan 15 00:32:13.088125 kubelet[2785]: I0115 00:32:13.087926 2785 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/388c3954-9ce8-4280-a01f-92fddc826177-whisker-backend-key-pair\") on node \"ci-4515.1.0-n-4ecc98c3fd\" DevicePath \"\"" Jan 15 00:32:13.088125 kubelet[2785]: I0115 00:32:13.087977 2785 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/388c3954-9ce8-4280-a01f-92fddc826177-whisker-ca-bundle\") on node \"ci-4515.1.0-n-4ecc98c3fd\" DevicePath \"\"" Jan 15 00:32:13.088125 kubelet[2785]: I0115 00:32:13.087996 2785 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-txsqm\" (UniqueName: \"kubernetes.io/projected/388c3954-9ce8-4280-a01f-92fddc826177-kube-api-access-txsqm\") on node \"ci-4515.1.0-n-4ecc98c3fd\" DevicePath \"\"" Jan 15 00:32:13.433217 kubelet[2785]: E0115 00:32:13.431987 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:32:13.436726 systemd[1]: Removed slice kubepods-besteffort-pod388c3954_9ce8_4280_a01f_92fddc826177.slice - libcontainer container kubepods-besteffort-pod388c3954_9ce8_4280_a01f_92fddc826177.slice. Jan 15 00:32:13.542740 systemd[1]: Created slice kubepods-besteffort-poda4b16ce3_dca7_42f9_90d7_10ddcc6423d9.slice - libcontainer container kubepods-besteffort-poda4b16ce3_dca7_42f9_90d7_10ddcc6423d9.slice. Jan 15 00:32:13.693373 kubelet[2785]: I0115 00:32:13.693113 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a4b16ce3-dca7-42f9-90d7-10ddcc6423d9-whisker-backend-key-pair\") pod \"whisker-7fdcf9f989-nm8zk\" (UID: \"a4b16ce3-dca7-42f9-90d7-10ddcc6423d9\") " pod="calico-system/whisker-7fdcf9f989-nm8zk" Jan 15 00:32:13.693698 kubelet[2785]: I0115 00:32:13.693568 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4b16ce3-dca7-42f9-90d7-10ddcc6423d9-whisker-ca-bundle\") pod \"whisker-7fdcf9f989-nm8zk\" (UID: \"a4b16ce3-dca7-42f9-90d7-10ddcc6423d9\") " pod="calico-system/whisker-7fdcf9f989-nm8zk" Jan 15 00:32:13.693698 kubelet[2785]: I0115 00:32:13.693618 2785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp2m5\" (UniqueName: \"kubernetes.io/projected/a4b16ce3-dca7-42f9-90d7-10ddcc6423d9-kube-api-access-xp2m5\") pod \"whisker-7fdcf9f989-nm8zk\" (UID: \"a4b16ce3-dca7-42f9-90d7-10ddcc6423d9\") " pod="calico-system/whisker-7fdcf9f989-nm8zk" Jan 15 00:32:13.849585 containerd[1616]: time="2026-01-15T00:32:13.849504012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7fdcf9f989-nm8zk,Uid:a4b16ce3-dca7-42f9-90d7-10ddcc6423d9,Namespace:calico-system,Attempt:0,}" Jan 15 00:32:14.258385 systemd-networkd[1518]: cali093a1d1a87f: Link UP Jan 15 00:32:14.258630 systemd-networkd[1518]: cali093a1d1a87f: Gained carrier Jan 15 00:32:14.305636 containerd[1616]: 2026-01-15 00:32:13.912 [INFO][3878] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 15 00:32:14.305636 containerd[1616]: 2026-01-15 00:32:13.944 [INFO][3878] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4515.1.0--n--4ecc98c3fd-k8s-whisker--7fdcf9f989--nm8zk-eth0 whisker-7fdcf9f989- calico-system a4b16ce3-dca7-42f9-90d7-10ddcc6423d9 967 0 2026-01-15 00:32:13 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7fdcf9f989 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4515.1.0-n-4ecc98c3fd whisker-7fdcf9f989-nm8zk eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali093a1d1a87f [] [] }} ContainerID="eb3add65ef3f45f49dd4902547c08b2d60c1eb3535f72b9606dfd293ec31f79e" Namespace="calico-system" Pod="whisker-7fdcf9f989-nm8zk" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-whisker--7fdcf9f989--nm8zk-" Jan 15 00:32:14.305636 containerd[1616]: 2026-01-15 00:32:13.944 [INFO][3878] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eb3add65ef3f45f49dd4902547c08b2d60c1eb3535f72b9606dfd293ec31f79e" Namespace="calico-system" Pod="whisker-7fdcf9f989-nm8zk" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-whisker--7fdcf9f989--nm8zk-eth0" Jan 15 00:32:14.305636 containerd[1616]: 2026-01-15 00:32:14.134 [INFO][3886] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb3add65ef3f45f49dd4902547c08b2d60c1eb3535f72b9606dfd293ec31f79e" HandleID="k8s-pod-network.eb3add65ef3f45f49dd4902547c08b2d60c1eb3535f72b9606dfd293ec31f79e" Workload="ci--4515.1.0--n--4ecc98c3fd-k8s-whisker--7fdcf9f989--nm8zk-eth0" Jan 15 00:32:14.306003 containerd[1616]: 2026-01-15 00:32:14.137 [INFO][3886] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="eb3add65ef3f45f49dd4902547c08b2d60c1eb3535f72b9606dfd293ec31f79e" HandleID="k8s-pod-network.eb3add65ef3f45f49dd4902547c08b2d60c1eb3535f72b9606dfd293ec31f79e" Workload="ci--4515.1.0--n--4ecc98c3fd-k8s-whisker--7fdcf9f989--nm8zk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-n-4ecc98c3fd", "pod":"whisker-7fdcf9f989-nm8zk", "timestamp":"2026-01-15 00:32:14.134733597 +0000 UTC"}, Hostname:"ci-4515.1.0-n-4ecc98c3fd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 00:32:14.306003 containerd[1616]: 2026-01-15 00:32:14.137 [INFO][3886] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 00:32:14.306003 containerd[1616]: 2026-01-15 00:32:14.138 [INFO][3886] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 15 00:32:14.306003 containerd[1616]: 2026-01-15 00:32:14.138 [INFO][3886] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-n-4ecc98c3fd' Jan 15 00:32:14.306003 containerd[1616]: 2026-01-15 00:32:14.168 [INFO][3886] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eb3add65ef3f45f49dd4902547c08b2d60c1eb3535f72b9606dfd293ec31f79e" host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:14.306003 containerd[1616]: 2026-01-15 00:32:14.193 [INFO][3886] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:14.306003 containerd[1616]: 2026-01-15 00:32:14.200 [INFO][3886] ipam/ipam.go 511: Trying affinity for 192.168.6.0/26 host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:14.306003 containerd[1616]: 2026-01-15 00:32:14.203 [INFO][3886] ipam/ipam.go 158: Attempting to load block cidr=192.168.6.0/26 host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:14.306003 containerd[1616]: 2026-01-15 00:32:14.214 [INFO][3886] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.6.0/26 host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:14.306419 containerd[1616]: 2026-01-15 00:32:14.214 [INFO][3886] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.6.0/26 handle="k8s-pod-network.eb3add65ef3f45f49dd4902547c08b2d60c1eb3535f72b9606dfd293ec31f79e" host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:14.306419 containerd[1616]: 2026-01-15 00:32:14.219 [INFO][3886] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.eb3add65ef3f45f49dd4902547c08b2d60c1eb3535f72b9606dfd293ec31f79e Jan 15 00:32:14.306419 containerd[1616]: 2026-01-15 00:32:14.226 [INFO][3886] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.6.0/26 handle="k8s-pod-network.eb3add65ef3f45f49dd4902547c08b2d60c1eb3535f72b9606dfd293ec31f79e" host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:14.306419 containerd[1616]: 2026-01-15 00:32:14.235 [INFO][3886] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.6.1/26] block=192.168.6.0/26 handle="k8s-pod-network.eb3add65ef3f45f49dd4902547c08b2d60c1eb3535f72b9606dfd293ec31f79e" host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:14.306419 containerd[1616]: 2026-01-15 00:32:14.236 [INFO][3886] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.6.1/26] handle="k8s-pod-network.eb3add65ef3f45f49dd4902547c08b2d60c1eb3535f72b9606dfd293ec31f79e" host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:14.306419 containerd[1616]: 2026-01-15 00:32:14.236 [INFO][3886] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
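The IPAM steps above show Calico's block-affinity model: the plugin takes the host-wide IPAM lock, confirms this node's affinity to the block 192.168.6.0/26, loads the block, claims one address (192.168.6.1) under a handle keyed by the sandbox ID, writes the block back, and releases the lock. A /26 block gives the node 64 addresses to assign locally. A tiny sketch checking that arithmetic with the values from the log (the arithmetic only, not Calico's code):

// Verify the block arithmetic recorded above: 192.168.6.1 is claimed from the
// node-affine block 192.168.6.0/26, which holds 2^(32-26) = 64 addresses.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.6.0/26")
	podIP := netip.MustParseAddr("192.168.6.1")

	fmt.Println("block contains pod IP:", block.Contains(podIP)) // true
	fmt.Println("addresses per block:", 1<<(32-block.Bits()))    // 64
}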
Jan 15 00:32:14.306419 containerd[1616]: 2026-01-15 00:32:14.236 [INFO][3886] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.6.1/26] IPv6=[] ContainerID="eb3add65ef3f45f49dd4902547c08b2d60c1eb3535f72b9606dfd293ec31f79e" HandleID="k8s-pod-network.eb3add65ef3f45f49dd4902547c08b2d60c1eb3535f72b9606dfd293ec31f79e" Workload="ci--4515.1.0--n--4ecc98c3fd-k8s-whisker--7fdcf9f989--nm8zk-eth0" Jan 15 00:32:14.306717 containerd[1616]: 2026-01-15 00:32:14.239 [INFO][3878] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eb3add65ef3f45f49dd4902547c08b2d60c1eb3535f72b9606dfd293ec31f79e" Namespace="calico-system" Pod="whisker-7fdcf9f989-nm8zk" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-whisker--7fdcf9f989--nm8zk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--4ecc98c3fd-k8s-whisker--7fdcf9f989--nm8zk-eth0", GenerateName:"whisker-7fdcf9f989-", Namespace:"calico-system", SelfLink:"", UID:"a4b16ce3-dca7-42f9-90d7-10ddcc6423d9", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 32, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7fdcf9f989", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-4ecc98c3fd", ContainerID:"", Pod:"whisker-7fdcf9f989-nm8zk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.6.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali093a1d1a87f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:32:14.306717 containerd[1616]: 2026-01-15 00:32:14.239 [INFO][3878] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.6.1/32] ContainerID="eb3add65ef3f45f49dd4902547c08b2d60c1eb3535f72b9606dfd293ec31f79e" Namespace="calico-system" Pod="whisker-7fdcf9f989-nm8zk" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-whisker--7fdcf9f989--nm8zk-eth0" Jan 15 00:32:14.306843 containerd[1616]: 2026-01-15 00:32:14.239 [INFO][3878] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali093a1d1a87f ContainerID="eb3add65ef3f45f49dd4902547c08b2d60c1eb3535f72b9606dfd293ec31f79e" Namespace="calico-system" Pod="whisker-7fdcf9f989-nm8zk" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-whisker--7fdcf9f989--nm8zk-eth0" Jan 15 00:32:14.306843 containerd[1616]: 2026-01-15 00:32:14.271 [INFO][3878] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb3add65ef3f45f49dd4902547c08b2d60c1eb3535f72b9606dfd293ec31f79e" Namespace="calico-system" Pod="whisker-7fdcf9f989-nm8zk" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-whisker--7fdcf9f989--nm8zk-eth0" Jan 15 00:32:14.306914 containerd[1616]: 2026-01-15 00:32:14.273 [INFO][3878] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eb3add65ef3f45f49dd4902547c08b2d60c1eb3535f72b9606dfd293ec31f79e" Namespace="calico-system" 
Pod="whisker-7fdcf9f989-nm8zk" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-whisker--7fdcf9f989--nm8zk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--4ecc98c3fd-k8s-whisker--7fdcf9f989--nm8zk-eth0", GenerateName:"whisker-7fdcf9f989-", Namespace:"calico-system", SelfLink:"", UID:"a4b16ce3-dca7-42f9-90d7-10ddcc6423d9", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 32, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7fdcf9f989", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-4ecc98c3fd", ContainerID:"eb3add65ef3f45f49dd4902547c08b2d60c1eb3535f72b9606dfd293ec31f79e", Pod:"whisker-7fdcf9f989-nm8zk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.6.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali093a1d1a87f", MAC:"86:82:2e:c2:42:95", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:32:14.306983 containerd[1616]: 2026-01-15 00:32:14.295 [INFO][3878] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eb3add65ef3f45f49dd4902547c08b2d60c1eb3535f72b9606dfd293ec31f79e" Namespace="calico-system" Pod="whisker-7fdcf9f989-nm8zk" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-whisker--7fdcf9f989--nm8zk-eth0" Jan 15 00:32:14.433574 kubelet[2785]: E0115 00:32:14.433530 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:32:14.587169 containerd[1616]: time="2026-01-15T00:32:14.586692885Z" level=info msg="connecting to shim eb3add65ef3f45f49dd4902547c08b2d60c1eb3535f72b9606dfd293ec31f79e" address="unix:///run/containerd/s/d4f553fe41f738cfcd3a2e86f19d27f716467c7e3b17a39f86508461714d0db3" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:32:14.672385 systemd[1]: Started cri-containerd-eb3add65ef3f45f49dd4902547c08b2d60c1eb3535f72b9606dfd293ec31f79e.scope - libcontainer container eb3add65ef3f45f49dd4902547c08b2d60c1eb3535f72b9606dfd293ec31f79e. 
Jan 15 00:32:14.711000 audit: BPF prog-id=175 op=LOAD Jan 15 00:32:14.713000 audit: BPF prog-id=176 op=LOAD Jan 15 00:32:14.713000 audit[4017]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4005 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:14.713000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562336164643635656633663435663439646434393032353437633038 Jan 15 00:32:14.713000 audit: BPF prog-id=176 op=UNLOAD Jan 15 00:32:14.713000 audit[4017]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4005 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:14.713000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562336164643635656633663435663439646434393032353437633038 Jan 15 00:32:14.713000 audit: BPF prog-id=177 op=LOAD Jan 15 00:32:14.713000 audit[4017]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4005 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:14.713000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562336164643635656633663435663439646434393032353437633038 Jan 15 00:32:14.713000 audit: BPF prog-id=178 op=LOAD Jan 15 00:32:14.713000 audit[4017]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4005 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:14.713000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562336164643635656633663435663439646434393032353437633038 Jan 15 00:32:14.713000 audit: BPF prog-id=178 op=UNLOAD Jan 15 00:32:14.713000 audit[4017]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4005 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:14.713000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562336164643635656633663435663439646434393032353437633038 Jan 15 00:32:14.713000 audit: BPF prog-id=177 op=UNLOAD Jan 15 00:32:14.713000 audit[4017]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4005 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:14.713000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562336164643635656633663435663439646434393032353437633038 Jan 15 00:32:14.713000 audit: BPF prog-id=179 op=LOAD Jan 15 00:32:14.713000 audit[4017]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4005 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:14.713000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562336164643635656633663435663439646434393032353437633038 Jan 15 00:32:14.819364 containerd[1616]: time="2026-01-15T00:32:14.819303940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7fdcf9f989-nm8zk,Uid:a4b16ce3-dca7-42f9-90d7-10ddcc6423d9,Namespace:calico-system,Attempt:0,} returns sandbox id \"eb3add65ef3f45f49dd4902547c08b2d60c1eb3535f72b9606dfd293ec31f79e\"" Jan 15 00:32:14.824350 containerd[1616]: time="2026-01-15T00:32:14.824243527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 15 00:32:14.989156 kubelet[2785]: I0115 00:32:14.988944 2785 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="388c3954-9ce8-4280-a01f-92fddc826177" path="/var/lib/kubelet/pods/388c3954-9ce8-4280-a01f-92fddc826177/volumes" Jan 15 00:32:15.113000 audit: BPF prog-id=180 op=LOAD Jan 15 00:32:15.113000 audit[4065]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd869a3e20 a2=98 a3=1fffffffffffffff items=0 ppid=3907 pid=4065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.113000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 15 00:32:15.115000 audit: BPF prog-id=180 op=UNLOAD Jan 15 00:32:15.115000 audit[4065]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd869a3df0 a3=0 items=0 ppid=3907 pid=4065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.115000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 15 00:32:15.116000 audit: BPF prog-id=181 op=LOAD Jan 15 00:32:15.116000 audit[4065]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd869a3d00 a2=94 a3=3 items=0 
ppid=3907 pid=4065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.116000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 15 00:32:15.119000 audit: BPF prog-id=181 op=UNLOAD Jan 15 00:32:15.119000 audit[4065]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd869a3d00 a2=94 a3=3 items=0 ppid=3907 pid=4065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.119000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 15 00:32:15.119000 audit: BPF prog-id=182 op=LOAD Jan 15 00:32:15.119000 audit[4065]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd869a3d40 a2=94 a3=7ffd869a3f20 items=0 ppid=3907 pid=4065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.119000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 15 00:32:15.120000 audit: BPF prog-id=182 op=UNLOAD Jan 15 00:32:15.120000 audit[4065]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd869a3d40 a2=94 a3=7ffd869a3f20 items=0 ppid=3907 pid=4065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.120000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 15 00:32:15.148000 audit: BPF prog-id=183 op=LOAD Jan 15 00:32:15.148000 audit[4067]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdf76acbb0 a2=98 a3=3 items=0 ppid=3907 pid=4067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.148000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:32:15.150000 audit: BPF prog-id=183 op=UNLOAD Jan 15 00:32:15.150000 audit[4067]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffdf76acb80 a3=0 items=0 ppid=3907 pid=4067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.150000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 
00:32:15.151000 audit: BPF prog-id=184 op=LOAD Jan 15 00:32:15.151000 audit[4067]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdf76ac9a0 a2=94 a3=54428f items=0 ppid=3907 pid=4067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.151000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:32:15.153000 audit: BPF prog-id=184 op=UNLOAD Jan 15 00:32:15.153000 audit[4067]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffdf76ac9a0 a2=94 a3=54428f items=0 ppid=3907 pid=4067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.153000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:32:15.154000 audit: BPF prog-id=185 op=LOAD Jan 15 00:32:15.154000 audit[4067]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdf76ac9d0 a2=94 a3=2 items=0 ppid=3907 pid=4067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.154000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:32:15.154000 audit: BPF prog-id=185 op=UNLOAD Jan 15 00:32:15.154000 audit[4067]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffdf76ac9d0 a2=0 a3=2 items=0 ppid=3907 pid=4067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.154000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:32:15.193518 containerd[1616]: time="2026-01-15T00:32:15.193464269Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:32:15.194707 containerd[1616]: time="2026-01-15T00:32:15.194400617Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 15 00:32:15.194707 containerd[1616]: time="2026-01-15T00:32:15.194461066Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 15 00:32:15.216214 kubelet[2785]: E0115 00:32:15.216077 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 00:32:15.216561 kubelet[2785]: E0115 00:32:15.216420 2785 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 00:32:15.224289 kubelet[2785]: E0115 00:32:15.224120 2785 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:1c85d4f6dcf249e199926edb662227fb,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xp2m5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7fdcf9f989-nm8zk_calico-system(a4b16ce3-dca7-42f9-90d7-10ddcc6423d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 15 00:32:15.228013 containerd[1616]: time="2026-01-15T00:32:15.227969582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 15 00:32:15.448000 audit: BPF prog-id=186 op=LOAD Jan 15 00:32:15.448000 audit[4067]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdf76ac890 a2=94 a3=1 items=0 ppid=3907 pid=4067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.448000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:32:15.448000 audit: BPF prog-id=186 op=UNLOAD Jan 15 00:32:15.448000 audit[4067]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffdf76ac890 a2=94 a3=1 items=0 ppid=3907 pid=4067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.448000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:32:15.467000 audit: BPF prog-id=187 op=LOAD Jan 15 00:32:15.467000 audit[4067]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffdf76ac880 a2=94 a3=4 items=0 ppid=3907 pid=4067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.467000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:32:15.468000 audit: BPF prog-id=187 op=UNLOAD Jan 15 00:32:15.468000 audit[4067]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffdf76ac880 a2=0 a3=4 
items=0 ppid=3907 pid=4067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.468000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:32:15.469000 audit: BPF prog-id=188 op=LOAD Jan 15 00:32:15.469000 audit[4067]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdf76ac6e0 a2=94 a3=5 items=0 ppid=3907 pid=4067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.469000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:32:15.469000 audit: BPF prog-id=188 op=UNLOAD Jan 15 00:32:15.469000 audit[4067]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffdf76ac6e0 a2=0 a3=5 items=0 ppid=3907 pid=4067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.469000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:32:15.469000 audit: BPF prog-id=189 op=LOAD Jan 15 00:32:15.469000 audit[4067]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffdf76ac900 a2=94 a3=6 items=0 ppid=3907 pid=4067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.469000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:32:15.469000 audit: BPF prog-id=189 op=UNLOAD Jan 15 00:32:15.469000 audit[4067]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffdf76ac900 a2=0 a3=6 items=0 ppid=3907 pid=4067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.469000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:32:15.471000 audit: BPF prog-id=190 op=LOAD Jan 15 00:32:15.471000 audit[4067]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffdf76ac0b0 a2=94 a3=88 items=0 ppid=3907 pid=4067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.471000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:32:15.471000 audit: BPF prog-id=191 op=LOAD Jan 15 00:32:15.471000 audit[4067]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffdf76abf30 a2=94 a3=2 items=0 ppid=3907 pid=4067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.471000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:32:15.471000 audit: BPF prog-id=191 op=UNLOAD Jan 15 00:32:15.471000 audit[4067]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffdf76abf60 a2=0 a3=7ffdf76ac060 items=0 ppid=3907 pid=4067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.471000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:32:15.472000 audit: BPF prog-id=190 op=UNLOAD Jan 15 00:32:15.472000 audit[4067]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=97f0d10 a2=0 a3=860fa4f4e9535a48 items=0 ppid=3907 pid=4067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.472000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:32:15.486000 audit: BPF prog-id=192 op=LOAD Jan 15 00:32:15.486000 audit[4089]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe198cdc60 a2=98 a3=1999999999999999 items=0 ppid=3907 pid=4089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.486000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 15 00:32:15.486000 audit: BPF prog-id=192 op=UNLOAD Jan 15 00:32:15.486000 audit[4089]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe198cdc30 a3=0 items=0 ppid=3907 pid=4089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.486000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 15 00:32:15.486000 audit: BPF prog-id=193 op=LOAD Jan 15 00:32:15.486000 audit[4089]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe198cdb40 a2=94 a3=ffff items=0 ppid=3907 pid=4089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.486000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 15 00:32:15.486000 audit: BPF prog-id=193 op=UNLOAD Jan 15 00:32:15.486000 audit[4089]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe198cdb40 a2=94 a3=ffff items=0 ppid=3907 pid=4089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.486000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 15 00:32:15.486000 audit: BPF prog-id=194 op=LOAD Jan 15 00:32:15.486000 audit[4089]: SYSCALL arch=c000003e syscall=321 success=yes 
exit=3 a0=5 a1=7ffe198cdb80 a2=94 a3=7ffe198cdd60 items=0 ppid=3907 pid=4089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.486000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 15 00:32:15.486000 audit: BPF prog-id=194 op=UNLOAD Jan 15 00:32:15.486000 audit[4089]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe198cdb80 a2=94 a3=7ffe198cdd60 items=0 ppid=3907 pid=4089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.486000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 15 00:32:15.573367 systemd-networkd[1518]: vxlan.calico: Link UP Jan 15 00:32:15.573377 systemd-networkd[1518]: vxlan.calico: Gained carrier Jan 15 00:32:15.578700 containerd[1616]: time="2026-01-15T00:32:15.576900446Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:32:15.580579 containerd[1616]: time="2026-01-15T00:32:15.580498819Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 15 00:32:15.580745 containerd[1616]: time="2026-01-15T00:32:15.580642845Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 15 00:32:15.583747 kubelet[2785]: E0115 00:32:15.582868 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 00:32:15.604002 kubelet[2785]: E0115 00:32:15.583098 2785 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 00:32:15.609052 kubelet[2785]: E0115 00:32:15.608578 2785 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xp2m5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7fdcf9f989-nm8zk_calico-system(a4b16ce3-dca7-42f9-90d7-10ddcc6423d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 15 00:32:15.610547 kubelet[2785]: E0115 00:32:15.610183 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fdcf9f989-nm8zk" podUID="a4b16ce3-dca7-42f9-90d7-10ddcc6423d9" Jan 15 00:32:15.651000 audit: BPF prog-id=195 op=LOAD Jan 15 00:32:15.651000 audit[4115]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc59cea7b0 a2=98 a3=0 items=0 ppid=3907 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.651000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 
00:32:15.653000 audit: BPF prog-id=195 op=UNLOAD Jan 15 00:32:15.653000 audit[4115]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc59cea780 a3=0 items=0 ppid=3907 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.653000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 00:32:15.653000 audit: BPF prog-id=196 op=LOAD Jan 15 00:32:15.653000 audit[4115]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc59cea5c0 a2=94 a3=54428f items=0 ppid=3907 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.653000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 00:32:15.654000 audit: BPF prog-id=196 op=UNLOAD Jan 15 00:32:15.654000 audit[4115]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc59cea5c0 a2=94 a3=54428f items=0 ppid=3907 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.654000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 00:32:15.655000 audit: BPF prog-id=197 op=LOAD Jan 15 00:32:15.655000 audit[4115]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc59cea5f0 a2=94 a3=2 items=0 ppid=3907 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.655000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 00:32:15.656000 audit: BPF prog-id=197 op=UNLOAD Jan 15 00:32:15.656000 audit[4115]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc59cea5f0 a2=0 a3=2 items=0 ppid=3907 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.656000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 00:32:15.656000 audit: BPF prog-id=198 op=LOAD Jan 15 00:32:15.656000 audit[4115]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc59cea3a0 a2=94 a3=4 items=0 ppid=3907 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.656000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 00:32:15.656000 audit: BPF prog-id=198 op=UNLOAD Jan 15 00:32:15.656000 audit[4115]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc59cea3a0 a2=94 a3=4 items=0 ppid=3907 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.656000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 00:32:15.656000 audit: BPF prog-id=199 op=LOAD Jan 15 00:32:15.656000 audit[4115]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc59cea4a0 a2=94 a3=7ffc59cea620 items=0 ppid=3907 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.656000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 00:32:15.658000 audit: BPF prog-id=199 op=UNLOAD Jan 15 00:32:15.658000 audit[4115]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc59cea4a0 a2=0 a3=7ffc59cea620 items=0 ppid=3907 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.658000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 00:32:15.660000 audit: BPF prog-id=200 op=LOAD Jan 15 00:32:15.660000 audit[4115]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc59ce9bd0 a2=94 a3=2 items=0 ppid=3907 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.660000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 00:32:15.662000 audit: BPF prog-id=200 op=UNLOAD Jan 15 00:32:15.662000 audit[4115]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc59ce9bd0 a2=0 a3=2 items=0 ppid=3907 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.662000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 
00:32:15.662000 audit: BPF prog-id=201 op=LOAD Jan 15 00:32:15.662000 audit[4115]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc59ce9cd0 a2=94 a3=30 items=0 ppid=3907 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.662000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 00:32:15.673000 audit: BPF prog-id=202 op=LOAD Jan 15 00:32:15.673000 audit[4119]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd621281f0 a2=98 a3=0 items=0 ppid=3907 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.673000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:32:15.675000 audit: BPF prog-id=202 op=UNLOAD Jan 15 00:32:15.675000 audit[4119]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd621281c0 a3=0 items=0 ppid=3907 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.675000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:32:15.675000 audit: BPF prog-id=203 op=LOAD Jan 15 00:32:15.675000 audit[4119]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd62127fe0 a2=94 a3=54428f items=0 ppid=3907 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.675000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:32:15.675000 audit: BPF prog-id=203 op=UNLOAD Jan 15 00:32:15.675000 audit[4119]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd62127fe0 a2=94 a3=54428f items=0 ppid=3907 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.675000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:32:15.675000 audit: BPF prog-id=204 op=LOAD Jan 15 00:32:15.675000 audit[4119]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd62128010 a2=94 a3=2 items=0 ppid=3907 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.675000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:32:15.675000 audit: BPF prog-id=204 op=UNLOAD Jan 15 00:32:15.675000 audit[4119]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd62128010 a2=0 a3=2 items=0 ppid=3907 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.675000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:32:15.887000 audit: BPF prog-id=205 op=LOAD Jan 15 00:32:15.887000 audit[4119]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd62127ed0 a2=94 a3=1 items=0 ppid=3907 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.887000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:32:15.887000 audit: BPF prog-id=205 op=UNLOAD Jan 15 00:32:15.887000 audit[4119]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd62127ed0 a2=94 a3=1 items=0 ppid=3907 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.887000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:32:15.900000 audit: BPF prog-id=206 op=LOAD Jan 15 00:32:15.900000 audit[4119]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd62127ec0 a2=94 a3=4 items=0 ppid=3907 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.900000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:32:15.901000 audit: BPF prog-id=206 op=UNLOAD Jan 15 00:32:15.901000 audit[4119]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd62127ec0 a2=0 a3=4 items=0 ppid=3907 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.901000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:32:15.901000 audit: BPF prog-id=207 op=LOAD Jan 15 00:32:15.901000 audit[4119]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd62127d20 a2=94 a3=5 items=0 ppid=3907 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.901000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:32:15.901000 audit: BPF prog-id=207 op=UNLOAD Jan 15 00:32:15.901000 audit[4119]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd62127d20 a2=0 a3=5 items=0 ppid=3907 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.901000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:32:15.901000 audit: BPF prog-id=208 op=LOAD Jan 15 00:32:15.901000 audit[4119]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd62127f40 a2=94 a3=6 items=0 ppid=3907 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.901000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:32:15.901000 audit: BPF prog-id=208 op=UNLOAD Jan 15 00:32:15.901000 audit[4119]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd62127f40 a2=0 a3=6 items=0 ppid=3907 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.901000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:32:15.902000 audit: BPF prog-id=209 op=LOAD Jan 15 00:32:15.902000 audit[4119]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd621276f0 a2=94 a3=88 items=0 ppid=3907 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.902000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:32:15.902000 audit: BPF prog-id=210 op=LOAD Jan 15 00:32:15.902000 audit[4119]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffd62127570 a2=94 a3=2 items=0 ppid=3907 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.902000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:32:15.902000 audit: BPF prog-id=210 op=UNLOAD Jan 15 00:32:15.902000 audit[4119]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffd621275a0 a2=0 
a3=7ffd621276a0 items=0 ppid=3907 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.902000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:32:15.903000 audit: BPF prog-id=209 op=UNLOAD Jan 15 00:32:15.903000 audit[4119]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=51cfd10 a2=0 a3=b667d08498e64bab items=0 ppid=3907 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.903000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:32:15.908000 audit: BPF prog-id=201 op=UNLOAD Jan 15 00:32:15.908000 audit[3907]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000e622c0 a2=0 a3=0 items=0 ppid=3903 pid=3907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:15.908000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 15 00:32:15.937458 systemd-networkd[1518]: cali093a1d1a87f: Gained IPv6LL Jan 15 00:32:15.987737 containerd[1616]: time="2026-01-15T00:32:15.987529933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rjlcz,Uid:14ced92d-cf89-41f0-99bf-edc9c92a737b,Namespace:calico-system,Attempt:0,}" Jan 15 00:32:16.022000 audit[4151]: NETFILTER_CFG table=mangle:121 family=2 entries=16 op=nft_register_chain pid=4151 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 00:32:16.023000 audit[4152]: NETFILTER_CFG table=nat:122 family=2 entries=15 op=nft_register_chain pid=4152 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 00:32:16.023000 audit[4152]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffd19fe2700 a2=0 a3=7ffd19fe26ec items=0 ppid=3907 pid=4152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:16.023000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 00:32:16.022000 audit[4151]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fff172dff80 a2=0 a3=7fff172dff6c items=0 ppid=3907 pid=4151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:16.022000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 00:32:16.032000 audit[4146]: NETFILTER_CFG table=raw:123 family=2 entries=21 op=nft_register_chain pid=4146 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 00:32:16.032000 audit[4146]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffcacbbbff0 a2=0 a3=7ffcacbbbfdc items=0 ppid=3907 pid=4146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:16.032000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 00:32:16.058000 audit[4157]: NETFILTER_CFG table=filter:124 family=2 entries=94 op=nft_register_chain pid=4157 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 00:32:16.058000 audit[4157]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffdb767b7a0 a2=0 a3=7ffdb767b78c items=0 ppid=3907 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:16.058000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 00:32:16.230660 systemd-networkd[1518]: caliccbc21f5f7c: Link UP Jan 15 00:32:16.236255 systemd-networkd[1518]: caliccbc21f5f7c: Gained carrier Jan 15 00:32:16.265587 containerd[1616]: 2026-01-15 00:32:16.085 [INFO][4141] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--n--4ecc98c3fd-k8s-csi--node--driver--rjlcz-eth0 csi-node-driver- calico-system 14ced92d-cf89-41f0-99bf-edc9c92a737b 809 0 2026-01-15 00:31:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4515.1.0-n-4ecc98c3fd csi-node-driver-rjlcz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliccbc21f5f7c [] [] }} ContainerID="554b6d2d156013b6671f7db79565c58c4ddb2c951a15d93eeb3fd60dde8af951" Namespace="calico-system" Pod="csi-node-driver-rjlcz" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-csi--node--driver--rjlcz-" Jan 15 00:32:16.265587 containerd[1616]: 2026-01-15 00:32:16.086 [INFO][4141] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="554b6d2d156013b6671f7db79565c58c4ddb2c951a15d93eeb3fd60dde8af951" Namespace="calico-system" Pod="csi-node-driver-rjlcz" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-csi--node--driver--rjlcz-eth0" Jan 15 00:32:16.265587 containerd[1616]: 2026-01-15 00:32:16.157 [INFO][4166] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="554b6d2d156013b6671f7db79565c58c4ddb2c951a15d93eeb3fd60dde8af951" HandleID="k8s-pod-network.554b6d2d156013b6671f7db79565c58c4ddb2c951a15d93eeb3fd60dde8af951" Workload="ci--4515.1.0--n--4ecc98c3fd-k8s-csi--node--driver--rjlcz-eth0" Jan 15 00:32:16.266297 containerd[1616]: 2026-01-15 00:32:16.157 [INFO][4166] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="554b6d2d156013b6671f7db79565c58c4ddb2c951a15d93eeb3fd60dde8af951" HandleID="k8s-pod-network.554b6d2d156013b6671f7db79565c58c4ddb2c951a15d93eeb3fd60dde8af951" Workload="ci--4515.1.0--n--4ecc98c3fd-k8s-csi--node--driver--rjlcz-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5720), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-n-4ecc98c3fd", "pod":"csi-node-driver-rjlcz", "timestamp":"2026-01-15 00:32:16.157036955 +0000 UTC"}, Hostname:"ci-4515.1.0-n-4ecc98c3fd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 00:32:16.266297 containerd[1616]: 2026-01-15 00:32:16.157 [INFO][4166] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 00:32:16.266297 containerd[1616]: 2026-01-15 00:32:16.157 [INFO][4166] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 15 00:32:16.266297 containerd[1616]: 2026-01-15 00:32:16.157 [INFO][4166] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-n-4ecc98c3fd' Jan 15 00:32:16.266297 containerd[1616]: 2026-01-15 00:32:16.167 [INFO][4166] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.554b6d2d156013b6671f7db79565c58c4ddb2c951a15d93eeb3fd60dde8af951" host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:16.266297 containerd[1616]: 2026-01-15 00:32:16.176 [INFO][4166] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:16.266297 containerd[1616]: 2026-01-15 00:32:16.185 [INFO][4166] ipam/ipam.go 511: Trying affinity for 192.168.6.0/26 host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:16.266297 containerd[1616]: 2026-01-15 00:32:16.191 [INFO][4166] ipam/ipam.go 158: Attempting to load block cidr=192.168.6.0/26 host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:16.266297 containerd[1616]: 2026-01-15 00:32:16.196 [INFO][4166] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.6.0/26 host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:16.268587 containerd[1616]: 2026-01-15 00:32:16.196 [INFO][4166] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.6.0/26 handle="k8s-pod-network.554b6d2d156013b6671f7db79565c58c4ddb2c951a15d93eeb3fd60dde8af951" host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:16.268587 containerd[1616]: 2026-01-15 00:32:16.200 [INFO][4166] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.554b6d2d156013b6671f7db79565c58c4ddb2c951a15d93eeb3fd60dde8af951 Jan 15 00:32:16.268587 containerd[1616]: 2026-01-15 00:32:16.207 [INFO][4166] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.6.0/26 handle="k8s-pod-network.554b6d2d156013b6671f7db79565c58c4ddb2c951a15d93eeb3fd60dde8af951" host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:16.268587 containerd[1616]: 2026-01-15 00:32:16.216 [INFO][4166] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.6.2/26] block=192.168.6.0/26 handle="k8s-pod-network.554b6d2d156013b6671f7db79565c58c4ddb2c951a15d93eeb3fd60dde8af951" host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:16.268587 containerd[1616]: 2026-01-15 00:32:16.216 [INFO][4166] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.6.2/26] handle="k8s-pod-network.554b6d2d156013b6671f7db79565c58c4ddb2c951a15d93eeb3fd60dde8af951" host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:16.268587 containerd[1616]: 2026-01-15 00:32:16.216 [INFO][4166] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
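
The audit records running through this section trace calico-node ("calico-node -felix", pid 3907) repeatedly invoking bpftool as it builds its BPF maps and programs; each PROCTITLE value is the invoked command line, hex-encoded with NUL bytes between arguments (the value beginning 627066746F6F6C006D6170006C697374... decodes to "bpftool map list --json", and the longer ones to "bpftool map create /sys/fs/bpf/..." and "bpftool prog load /usr/lib/calico/bpf/filter.o ... type xdp"). A minimal decoding sketch in Python follows; the helper name is chosen here for illustration and is not part of the log.

def decode_proctitle(hex_value: str) -> list[str]:
    """Decode an audit PROCTITLE hex string into its NUL-separated argv."""
    raw = bytes.fromhex(hex_value)
    return raw.decode("utf-8", errors="replace").split("\x00")

# Example, using a PROCTITLE value copied from the records above:
print(decode_proctitle("627066746F6F6C006D6170006C697374002D2D6A736F6E"))
# -> ['bpftool', 'map', 'list', '--json']
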
Jan 15 00:32:16.268587 containerd[1616]: 2026-01-15 00:32:16.216 [INFO][4166] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.6.2/26] IPv6=[] ContainerID="554b6d2d156013b6671f7db79565c58c4ddb2c951a15d93eeb3fd60dde8af951" HandleID="k8s-pod-network.554b6d2d156013b6671f7db79565c58c4ddb2c951a15d93eeb3fd60dde8af951" Workload="ci--4515.1.0--n--4ecc98c3fd-k8s-csi--node--driver--rjlcz-eth0" Jan 15 00:32:16.268931 containerd[1616]: 2026-01-15 00:32:16.224 [INFO][4141] cni-plugin/k8s.go 418: Populated endpoint ContainerID="554b6d2d156013b6671f7db79565c58c4ddb2c951a15d93eeb3fd60dde8af951" Namespace="calico-system" Pod="csi-node-driver-rjlcz" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-csi--node--driver--rjlcz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--4ecc98c3fd-k8s-csi--node--driver--rjlcz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"14ced92d-cf89-41f0-99bf-edc9c92a737b", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 31, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-4ecc98c3fd", ContainerID:"", Pod:"csi-node-driver-rjlcz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.6.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliccbc21f5f7c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:32:16.269111 containerd[1616]: 2026-01-15 00:32:16.224 [INFO][4141] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.6.2/32] ContainerID="554b6d2d156013b6671f7db79565c58c4ddb2c951a15d93eeb3fd60dde8af951" Namespace="calico-system" Pod="csi-node-driver-rjlcz" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-csi--node--driver--rjlcz-eth0" Jan 15 00:32:16.269111 containerd[1616]: 2026-01-15 00:32:16.225 [INFO][4141] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliccbc21f5f7c ContainerID="554b6d2d156013b6671f7db79565c58c4ddb2c951a15d93eeb3fd60dde8af951" Namespace="calico-system" Pod="csi-node-driver-rjlcz" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-csi--node--driver--rjlcz-eth0" Jan 15 00:32:16.269111 containerd[1616]: 2026-01-15 00:32:16.231 [INFO][4141] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="554b6d2d156013b6671f7db79565c58c4ddb2c951a15d93eeb3fd60dde8af951" Namespace="calico-system" Pod="csi-node-driver-rjlcz" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-csi--node--driver--rjlcz-eth0" Jan 15 00:32:16.269308 containerd[1616]: 2026-01-15 00:32:16.232 [INFO][4141] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="554b6d2d156013b6671f7db79565c58c4ddb2c951a15d93eeb3fd60dde8af951" Namespace="calico-system" Pod="csi-node-driver-rjlcz" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-csi--node--driver--rjlcz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--4ecc98c3fd-k8s-csi--node--driver--rjlcz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"14ced92d-cf89-41f0-99bf-edc9c92a737b", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 31, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-4ecc98c3fd", ContainerID:"554b6d2d156013b6671f7db79565c58c4ddb2c951a15d93eeb3fd60dde8af951", Pod:"csi-node-driver-rjlcz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.6.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliccbc21f5f7c", MAC:"2e:a2:0e:4f:96:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:32:16.269450 containerd[1616]: 2026-01-15 00:32:16.259 [INFO][4141] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="554b6d2d156013b6671f7db79565c58c4ddb2c951a15d93eeb3fd60dde8af951" Namespace="calico-system" Pod="csi-node-driver-rjlcz" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-csi--node--driver--rjlcz-eth0" Jan 15 00:32:16.297000 audit[4183]: NETFILTER_CFG table=filter:125 family=2 entries=36 op=nft_register_chain pid=4183 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 00:32:16.297000 audit[4183]: SYSCALL arch=c000003e syscall=46 success=yes exit=19576 a0=3 a1=7ffe507a8bf0 a2=0 a3=7ffe507a8bdc items=0 ppid=3907 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:16.297000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 00:32:16.306272 containerd[1616]: time="2026-01-15T00:32:16.306187114Z" level=info msg="connecting to shim 554b6d2d156013b6671f7db79565c58c4ddb2c951a15d93eeb3fd60dde8af951" address="unix:///run/containerd/s/682c7fe80f9ce5fc8d4465c41b6f62e8f29802be37379a29a92f767e12edb920" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:32:16.362413 systemd[1]: Started cri-containerd-554b6d2d156013b6671f7db79565c58c4ddb2c951a15d93eeb3fd60dde8af951.scope - libcontainer container 554b6d2d156013b6671f7db79565c58c4ddb2c951a15d93eeb3fd60dde8af951. 
Jan 15 00:32:16.383000 audit: BPF prog-id=211 op=LOAD Jan 15 00:32:16.384000 audit: BPF prog-id=212 op=LOAD Jan 15 00:32:16.384000 audit[4205]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=4192 pid=4205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:16.384000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535346236643264313536303133623636373166376462373935363563 Jan 15 00:32:16.384000 audit: BPF prog-id=212 op=UNLOAD Jan 15 00:32:16.384000 audit[4205]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4192 pid=4205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:16.384000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535346236643264313536303133623636373166376462373935363563 Jan 15 00:32:16.385000 audit: BPF prog-id=213 op=LOAD Jan 15 00:32:16.385000 audit[4205]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4192 pid=4205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:16.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535346236643264313536303133623636373166376462373935363563 Jan 15 00:32:16.385000 audit: BPF prog-id=214 op=LOAD Jan 15 00:32:16.385000 audit[4205]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4192 pid=4205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:16.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535346236643264313536303133623636373166376462373935363563 Jan 15 00:32:16.385000 audit: BPF prog-id=214 op=UNLOAD Jan 15 00:32:16.385000 audit[4205]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4192 pid=4205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:16.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535346236643264313536303133623636373166376462373935363563 Jan 15 00:32:16.385000 audit: BPF prog-id=213 op=UNLOAD Jan 15 00:32:16.385000 audit[4205]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4192 pid=4205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:16.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535346236643264313536303133623636373166376462373935363563 Jan 15 00:32:16.385000 audit: BPF prog-id=215 op=LOAD Jan 15 00:32:16.385000 audit[4205]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=4192 pid=4205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:16.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535346236643264313536303133623636373166376462373935363563 Jan 15 00:32:16.410506 containerd[1616]: time="2026-01-15T00:32:16.410458838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rjlcz,Uid:14ced92d-cf89-41f0-99bf-edc9c92a737b,Namespace:calico-system,Attempt:0,} returns sandbox id \"554b6d2d156013b6671f7db79565c58c4ddb2c951a15d93eeb3fd60dde8af951\"" Jan 15 00:32:16.413547 containerd[1616]: time="2026-01-15T00:32:16.413455074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 15 00:32:16.448264 kubelet[2785]: E0115 00:32:16.447450 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fdcf9f989-nm8zk" podUID="a4b16ce3-dca7-42f9-90d7-10ddcc6423d9" Jan 15 00:32:16.533000 audit[4232]: NETFILTER_CFG table=filter:126 family=2 entries=20 op=nft_register_rule pid=4232 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:32:16.533000 audit[4232]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdec769280 a2=0 a3=7ffdec76926c items=0 ppid=2937 pid=4232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:16.533000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:32:16.540000 audit[4232]: NETFILTER_CFG table=nat:127 family=2 entries=14 op=nft_register_rule pid=4232 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:32:16.540000 
audit[4232]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdec769280 a2=0 a3=0 items=0 ppid=2937 pid=4232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:16.540000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:32:16.774888 containerd[1616]: time="2026-01-15T00:32:16.774690118Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:32:16.775776 containerd[1616]: time="2026-01-15T00:32:16.775635357Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 15 00:32:16.775776 containerd[1616]: time="2026-01-15T00:32:16.775686626Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 15 00:32:16.776381 kubelet[2785]: E0115 00:32:16.776278 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 00:32:16.777327 kubelet[2785]: E0115 00:32:16.776900 2785 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 00:32:16.777327 kubelet[2785]: E0115 00:32:16.777250 2785 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-brgll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rjlcz_calico-system(14ced92d-cf89-41f0-99bf-edc9c92a737b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 15 00:32:16.780924 containerd[1616]: time="2026-01-15T00:32:16.780852941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 15 00:32:16.987387 containerd[1616]: time="2026-01-15T00:32:16.987175994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-586c796f68-gf7fx,Uid:b432d05d-ed71-4758-b9af-7738bf34afb7,Namespace:calico-apiserver,Attempt:0,}" Jan 15 00:32:16.989837 kubelet[2785]: E0115 00:32:16.988332 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:32:16.989837 kubelet[2785]: E0115 00:32:16.989603 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:32:16.990972 containerd[1616]: time="2026-01-15T00:32:16.990378815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n8h7v,Uid:5e2ebb1c-cdf8-4c57-934e-7ae859fc7427,Namespace:kube-system,Attempt:0,}" Jan 15 00:32:16.990972 containerd[1616]: time="2026-01-15T00:32:16.990810254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d4f97847b-lrvs5,Uid:5adbdfdd-96a2-41eb-8663-7460bd3865b9,Namespace:calico-system,Attempt:0,}" Jan 15 00:32:16.991285 containerd[1616]: time="2026-01-15T00:32:16.991244328Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-cxd6l,Uid:bd9fdd13-b944-49f0-8efe-4c6c4031a849,Namespace:kube-system,Attempt:0,}" Jan 15 00:32:17.136078 containerd[1616]: time="2026-01-15T00:32:17.135441025Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:32:17.142254 containerd[1616]: time="2026-01-15T00:32:17.141380761Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 15 00:32:17.143075 containerd[1616]: time="2026-01-15T00:32:17.142845697Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 15 00:32:17.145499 kubelet[2785]: E0115 00:32:17.144575 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 00:32:17.145499 kubelet[2785]: E0115 00:32:17.144767 2785 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 00:32:17.146042 kubelet[2785]: E0115 00:32:17.144980 2785 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-brgll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rjlcz_calico-system(14ced92d-cf89-41f0-99bf-edc9c92a737b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 15 00:32:17.147458 kubelet[2785]: E0115 00:32:17.147031 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rjlcz" podUID="14ced92d-cf89-41f0-99bf-edc9c92a737b" Jan 15 00:32:17.153253 systemd-networkd[1518]: vxlan.calico: Gained IPv6LL Jan 15 00:32:17.357414 systemd-networkd[1518]: calid38bf3ec5df: Link UP Jan 15 00:32:17.361371 systemd-networkd[1518]: calid38bf3ec5df: Gained carrier Jan 15 00:32:17.391801 containerd[1616]: 2026-01-15 00:32:17.158 [INFO][4233] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--n--4ecc98c3fd-k8s-calico--apiserver--586c796f68--gf7fx-eth0 calico-apiserver-586c796f68- calico-apiserver b432d05d-ed71-4758-b9af-7738bf34afb7 892 0 2026-01-15 00:31:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:586c796f68 
projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515.1.0-n-4ecc98c3fd calico-apiserver-586c796f68-gf7fx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid38bf3ec5df [] [] }} ContainerID="5e2404c2a290edecd02ffd86b2718c16d8aa8a56fa550fe3311011453a6dc346" Namespace="calico-apiserver" Pod="calico-apiserver-586c796f68-gf7fx" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-calico--apiserver--586c796f68--gf7fx-" Jan 15 00:32:17.391801 containerd[1616]: 2026-01-15 00:32:17.159 [INFO][4233] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5e2404c2a290edecd02ffd86b2718c16d8aa8a56fa550fe3311011453a6dc346" Namespace="calico-apiserver" Pod="calico-apiserver-586c796f68-gf7fx" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-calico--apiserver--586c796f68--gf7fx-eth0" Jan 15 00:32:17.391801 containerd[1616]: 2026-01-15 00:32:17.260 [INFO][4284] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5e2404c2a290edecd02ffd86b2718c16d8aa8a56fa550fe3311011453a6dc346" HandleID="k8s-pod-network.5e2404c2a290edecd02ffd86b2718c16d8aa8a56fa550fe3311011453a6dc346" Workload="ci--4515.1.0--n--4ecc98c3fd-k8s-calico--apiserver--586c796f68--gf7fx-eth0" Jan 15 00:32:17.394919 containerd[1616]: 2026-01-15 00:32:17.261 [INFO][4284] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5e2404c2a290edecd02ffd86b2718c16d8aa8a56fa550fe3311011453a6dc346" HandleID="k8s-pod-network.5e2404c2a290edecd02ffd86b2718c16d8aa8a56fa550fe3311011453a6dc346" Workload="ci--4515.1.0--n--4ecc98c3fd-k8s-calico--apiserver--586c796f68--gf7fx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515.1.0-n-4ecc98c3fd", "pod":"calico-apiserver-586c796f68-gf7fx", "timestamp":"2026-01-15 00:32:17.260490527 +0000 UTC"}, Hostname:"ci-4515.1.0-n-4ecc98c3fd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 00:32:17.394919 containerd[1616]: 2026-01-15 00:32:17.261 [INFO][4284] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 00:32:17.394919 containerd[1616]: 2026-01-15 00:32:17.261 [INFO][4284] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
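The pull failures recorded above (calico/csi, node-driver-registrar, whisker) all come back from ghcr.io as a plain 404 for the v3.30.4 tag. As a standalone illustration only — assuming ghcr.io's usual anonymous OCI token flow for public repositories, and not taken from this host — a short Python sketch of the same manifest lookup containerd is performing:

import json
import urllib.error
import urllib.request

# Repository and tag taken from the failing image reference in the log above.
repo = "flatcar/calico/csi"
tag = "v3.30.4"

# Anonymous pull token (assumed ghcr.io token endpoint for public images).
token_url = f"https://ghcr.io/token?service=ghcr.io&scope=repository:{repo}:pull"
with urllib.request.urlopen(token_url) as resp:
    token = json.load(resp)["token"]

# HEAD the manifest; a 404 here is what containerd surfaces as "not found".
req = urllib.request.Request(
    f"https://ghcr.io/v2/{repo}/manifests/{tag}",
    method="HEAD",
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": ", ".join([
            "application/vnd.oci.image.index.v1+json",
            "application/vnd.docker.distribution.manifest.list.v2+json",
            "application/vnd.docker.distribution.manifest.v2+json",
        ]),
    },
)
try:
    with urllib.request.urlopen(req) as resp:
        print(f"{repo}:{tag} exists (HTTP {resp.status})")
except urllib.error.HTTPError as err:
    print(f"{repo}:{tag} -> HTTP {err.code}")  # 404 reproduces the ErrImagePull above

A 404 from this check matches the ErrImagePull/ImagePullBackOff entries in the log; a 200 would point the investigation at the node side (auth, proxy, or containerd configuration) instead of the registry.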
Jan 15 00:32:17.394919 containerd[1616]: 2026-01-15 00:32:17.261 [INFO][4284] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-n-4ecc98c3fd' Jan 15 00:32:17.394919 containerd[1616]: 2026-01-15 00:32:17.278 [INFO][4284] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5e2404c2a290edecd02ffd86b2718c16d8aa8a56fa550fe3311011453a6dc346" host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:17.394919 containerd[1616]: 2026-01-15 00:32:17.294 [INFO][4284] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:17.394919 containerd[1616]: 2026-01-15 00:32:17.309 [INFO][4284] ipam/ipam.go 511: Trying affinity for 192.168.6.0/26 host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:17.394919 containerd[1616]: 2026-01-15 00:32:17.315 [INFO][4284] ipam/ipam.go 158: Attempting to load block cidr=192.168.6.0/26 host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:17.394919 containerd[1616]: 2026-01-15 00:32:17.322 [INFO][4284] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.6.0/26 host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:17.397368 containerd[1616]: 2026-01-15 00:32:17.322 [INFO][4284] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.6.0/26 handle="k8s-pod-network.5e2404c2a290edecd02ffd86b2718c16d8aa8a56fa550fe3311011453a6dc346" host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:17.397368 containerd[1616]: 2026-01-15 00:32:17.326 [INFO][4284] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5e2404c2a290edecd02ffd86b2718c16d8aa8a56fa550fe3311011453a6dc346 Jan 15 00:32:17.397368 containerd[1616]: 2026-01-15 00:32:17.334 [INFO][4284] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.6.0/26 handle="k8s-pod-network.5e2404c2a290edecd02ffd86b2718c16d8aa8a56fa550fe3311011453a6dc346" host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:17.397368 containerd[1616]: 2026-01-15 00:32:17.343 [INFO][4284] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.6.3/26] block=192.168.6.0/26 handle="k8s-pod-network.5e2404c2a290edecd02ffd86b2718c16d8aa8a56fa550fe3311011453a6dc346" host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:17.397368 containerd[1616]: 2026-01-15 00:32:17.343 [INFO][4284] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.6.3/26] handle="k8s-pod-network.5e2404c2a290edecd02ffd86b2718c16d8aa8a56fa550fe3311011453a6dc346" host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:17.397368 containerd[1616]: 2026-01-15 00:32:17.344 [INFO][4284] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
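The IPAM sequence above is plain CIDR bookkeeping: this node holds an affinity for the 192.168.6.0/26 block, single addresses are claimed from it, and each workload endpoint is then written with a /32. A minimal sketch of that arithmetic using Python's standard ipaddress module (illustrative only, not Calico's own code):

import ipaddress

# The affine block and the addresses assigned in this stretch of the log.
block = ipaddress.ip_network("192.168.6.0/26")
assigned = [ipaddress.ip_address(a) for a in
            ("192.168.6.3", "192.168.6.4", "192.168.6.5", "192.168.6.6")]

print(block.num_addresses)                 # 64 addresses per /26 block
print(all(a in block for a in assigned))   # True: every claimed IP falls in the block

# Each endpoint is recorded with a /32, i.e. a single-host network.
print(ipaddress.ip_network("192.168.6.3/32").num_addresses)  # 1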
Jan 15 00:32:17.397368 containerd[1616]: 2026-01-15 00:32:17.344 [INFO][4284] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.6.3/26] IPv6=[] ContainerID="5e2404c2a290edecd02ffd86b2718c16d8aa8a56fa550fe3311011453a6dc346" HandleID="k8s-pod-network.5e2404c2a290edecd02ffd86b2718c16d8aa8a56fa550fe3311011453a6dc346" Workload="ci--4515.1.0--n--4ecc98c3fd-k8s-calico--apiserver--586c796f68--gf7fx-eth0" Jan 15 00:32:17.397574 containerd[1616]: 2026-01-15 00:32:17.349 [INFO][4233] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5e2404c2a290edecd02ffd86b2718c16d8aa8a56fa550fe3311011453a6dc346" Namespace="calico-apiserver" Pod="calico-apiserver-586c796f68-gf7fx" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-calico--apiserver--586c796f68--gf7fx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--4ecc98c3fd-k8s-calico--apiserver--586c796f68--gf7fx-eth0", GenerateName:"calico-apiserver-586c796f68-", Namespace:"calico-apiserver", SelfLink:"", UID:"b432d05d-ed71-4758-b9af-7738bf34afb7", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 31, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"586c796f68", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-4ecc98c3fd", ContainerID:"", Pod:"calico-apiserver-586c796f68-gf7fx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.6.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid38bf3ec5df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:32:17.397651 containerd[1616]: 2026-01-15 00:32:17.349 [INFO][4233] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.6.3/32] ContainerID="5e2404c2a290edecd02ffd86b2718c16d8aa8a56fa550fe3311011453a6dc346" Namespace="calico-apiserver" Pod="calico-apiserver-586c796f68-gf7fx" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-calico--apiserver--586c796f68--gf7fx-eth0" Jan 15 00:32:17.397651 containerd[1616]: 2026-01-15 00:32:17.349 [INFO][4233] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid38bf3ec5df ContainerID="5e2404c2a290edecd02ffd86b2718c16d8aa8a56fa550fe3311011453a6dc346" Namespace="calico-apiserver" Pod="calico-apiserver-586c796f68-gf7fx" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-calico--apiserver--586c796f68--gf7fx-eth0" Jan 15 00:32:17.397651 containerd[1616]: 2026-01-15 00:32:17.364 [INFO][4233] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5e2404c2a290edecd02ffd86b2718c16d8aa8a56fa550fe3311011453a6dc346" Namespace="calico-apiserver" Pod="calico-apiserver-586c796f68-gf7fx" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-calico--apiserver--586c796f68--gf7fx-eth0" Jan 15 00:32:17.397727 containerd[1616]: 2026-01-15 00:32:17.364 [INFO][4233] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5e2404c2a290edecd02ffd86b2718c16d8aa8a56fa550fe3311011453a6dc346" Namespace="calico-apiserver" Pod="calico-apiserver-586c796f68-gf7fx" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-calico--apiserver--586c796f68--gf7fx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--4ecc98c3fd-k8s-calico--apiserver--586c796f68--gf7fx-eth0", GenerateName:"calico-apiserver-586c796f68-", Namespace:"calico-apiserver", SelfLink:"", UID:"b432d05d-ed71-4758-b9af-7738bf34afb7", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 31, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"586c796f68", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-4ecc98c3fd", ContainerID:"5e2404c2a290edecd02ffd86b2718c16d8aa8a56fa550fe3311011453a6dc346", Pod:"calico-apiserver-586c796f68-gf7fx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.6.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid38bf3ec5df", MAC:"82:ca:a5:73:a1:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:32:17.397781 containerd[1616]: 2026-01-15 00:32:17.380 [INFO][4233] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5e2404c2a290edecd02ffd86b2718c16d8aa8a56fa550fe3311011453a6dc346" Namespace="calico-apiserver" Pod="calico-apiserver-586c796f68-gf7fx" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-calico--apiserver--586c796f68--gf7fx-eth0" Jan 15 00:32:17.441613 containerd[1616]: time="2026-01-15T00:32:17.441298543Z" level=info msg="connecting to shim 5e2404c2a290edecd02ffd86b2718c16d8aa8a56fa550fe3311011453a6dc346" address="unix:///run/containerd/s/470c42199ed021a78a89ef5bc4387d3f7cfc3aa1584c214b20a7ea5790ae1005" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:32:17.459722 kubelet[2785]: E0115 00:32:17.459639 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rjlcz" 
podUID="14ced92d-cf89-41f0-99bf-edc9c92a737b" Jan 15 00:32:17.501211 systemd-networkd[1518]: cali60c829703c8: Link UP Jan 15 00:32:17.506216 systemd-networkd[1518]: cali60c829703c8: Gained carrier Jan 15 00:32:17.533000 audit[4335]: NETFILTER_CFG table=filter:128 family=2 entries=54 op=nft_register_chain pid=4335 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 00:32:17.536316 kernel: kauditd_printk_skb: 256 callbacks suppressed Jan 15 00:32:17.536461 kernel: audit: type=1325 audit(1768437137.533:670): table=filter:128 family=2 entries=54 op=nft_register_chain pid=4335 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 00:32:17.533000 audit[4335]: SYSCALL arch=c000003e syscall=46 success=yes exit=29396 a0=3 a1=7ffcf8059e10 a2=0 a3=7ffcf8059dfc items=0 ppid=3907 pid=4335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:17.545284 kernel: audit: type=1300 audit(1768437137.533:670): arch=c000003e syscall=46 success=yes exit=29396 a0=3 a1=7ffcf8059e10 a2=0 a3=7ffcf8059dfc items=0 ppid=3907 pid=4335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:17.533000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 00:32:17.550100 kernel: audit: type=1327 audit(1768437137.533:670): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 00:32:17.551673 containerd[1616]: 2026-01-15 00:32:17.132 [INFO][4254] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--n--4ecc98c3fd-k8s-calico--kube--controllers--7d4f97847b--lrvs5-eth0 calico-kube-controllers-7d4f97847b- calico-system 5adbdfdd-96a2-41eb-8663-7460bd3865b9 890 0 2026-01-15 00:31:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7d4f97847b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4515.1.0-n-4ecc98c3fd calico-kube-controllers-7d4f97847b-lrvs5 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali60c829703c8 [] [] }} ContainerID="2a40b460fec1a559eefa0ce9fc09e205ce706b2050fe9c6bed047d22ae47c914" Namespace="calico-system" Pod="calico-kube-controllers-7d4f97847b-lrvs5" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-calico--kube--controllers--7d4f97847b--lrvs5-" Jan 15 00:32:17.551673 containerd[1616]: 2026-01-15 00:32:17.133 [INFO][4254] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2a40b460fec1a559eefa0ce9fc09e205ce706b2050fe9c6bed047d22ae47c914" Namespace="calico-system" Pod="calico-kube-controllers-7d4f97847b-lrvs5" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-calico--kube--controllers--7d4f97847b--lrvs5-eth0" Jan 15 00:32:17.551673 containerd[1616]: 2026-01-15 00:32:17.277 [INFO][4278] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a40b460fec1a559eefa0ce9fc09e205ce706b2050fe9c6bed047d22ae47c914" 
HandleID="k8s-pod-network.2a40b460fec1a559eefa0ce9fc09e205ce706b2050fe9c6bed047d22ae47c914" Workload="ci--4515.1.0--n--4ecc98c3fd-k8s-calico--kube--controllers--7d4f97847b--lrvs5-eth0" Jan 15 00:32:17.551954 containerd[1616]: 2026-01-15 00:32:17.280 [INFO][4278] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2a40b460fec1a559eefa0ce9fc09e205ce706b2050fe9c6bed047d22ae47c914" HandleID="k8s-pod-network.2a40b460fec1a559eefa0ce9fc09e205ce706b2050fe9c6bed047d22ae47c914" Workload="ci--4515.1.0--n--4ecc98c3fd-k8s-calico--kube--controllers--7d4f97847b--lrvs5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000317930), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-n-4ecc98c3fd", "pod":"calico-kube-controllers-7d4f97847b-lrvs5", "timestamp":"2026-01-15 00:32:17.277037956 +0000 UTC"}, Hostname:"ci-4515.1.0-n-4ecc98c3fd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 00:32:17.551954 containerd[1616]: 2026-01-15 00:32:17.280 [INFO][4278] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 00:32:17.551954 containerd[1616]: 2026-01-15 00:32:17.344 [INFO][4278] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 15 00:32:17.551954 containerd[1616]: 2026-01-15 00:32:17.345 [INFO][4278] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-n-4ecc98c3fd' Jan 15 00:32:17.551954 containerd[1616]: 2026-01-15 00:32:17.382 [INFO][4278] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2a40b460fec1a559eefa0ce9fc09e205ce706b2050fe9c6bed047d22ae47c914" host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:17.551954 containerd[1616]: 2026-01-15 00:32:17.398 [INFO][4278] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:17.551954 containerd[1616]: 2026-01-15 00:32:17.408 [INFO][4278] ipam/ipam.go 511: Trying affinity for 192.168.6.0/26 host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:17.551954 containerd[1616]: 2026-01-15 00:32:17.418 [INFO][4278] ipam/ipam.go 158: Attempting to load block cidr=192.168.6.0/26 host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:17.551954 containerd[1616]: 2026-01-15 00:32:17.424 [INFO][4278] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.6.0/26 host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:17.552828 containerd[1616]: 2026-01-15 00:32:17.424 [INFO][4278] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.6.0/26 handle="k8s-pod-network.2a40b460fec1a559eefa0ce9fc09e205ce706b2050fe9c6bed047d22ae47c914" host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:17.552828 containerd[1616]: 2026-01-15 00:32:17.428 [INFO][4278] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2a40b460fec1a559eefa0ce9fc09e205ce706b2050fe9c6bed047d22ae47c914 Jan 15 00:32:17.552828 containerd[1616]: 2026-01-15 00:32:17.440 [INFO][4278] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.6.0/26 handle="k8s-pod-network.2a40b460fec1a559eefa0ce9fc09e205ce706b2050fe9c6bed047d22ae47c914" host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:17.552828 containerd[1616]: 2026-01-15 00:32:17.459 [INFO][4278] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.6.4/26] block=192.168.6.0/26 handle="k8s-pod-network.2a40b460fec1a559eefa0ce9fc09e205ce706b2050fe9c6bed047d22ae47c914" host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:17.552828 
containerd[1616]: 2026-01-15 00:32:17.461 [INFO][4278] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.6.4/26] handle="k8s-pod-network.2a40b460fec1a559eefa0ce9fc09e205ce706b2050fe9c6bed047d22ae47c914" host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:17.552828 containerd[1616]: 2026-01-15 00:32:17.461 [INFO][4278] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 15 00:32:17.552828 containerd[1616]: 2026-01-15 00:32:17.461 [INFO][4278] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.6.4/26] IPv6=[] ContainerID="2a40b460fec1a559eefa0ce9fc09e205ce706b2050fe9c6bed047d22ae47c914" HandleID="k8s-pod-network.2a40b460fec1a559eefa0ce9fc09e205ce706b2050fe9c6bed047d22ae47c914" Workload="ci--4515.1.0--n--4ecc98c3fd-k8s-calico--kube--controllers--7d4f97847b--lrvs5-eth0" Jan 15 00:32:17.553699 containerd[1616]: 2026-01-15 00:32:17.477 [INFO][4254] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2a40b460fec1a559eefa0ce9fc09e205ce706b2050fe9c6bed047d22ae47c914" Namespace="calico-system" Pod="calico-kube-controllers-7d4f97847b-lrvs5" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-calico--kube--controllers--7d4f97847b--lrvs5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--4ecc98c3fd-k8s-calico--kube--controllers--7d4f97847b--lrvs5-eth0", GenerateName:"calico-kube-controllers-7d4f97847b-", Namespace:"calico-system", SelfLink:"", UID:"5adbdfdd-96a2-41eb-8663-7460bd3865b9", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 31, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d4f97847b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-4ecc98c3fd", ContainerID:"", Pod:"calico-kube-controllers-7d4f97847b-lrvs5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.6.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali60c829703c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:32:17.553785 containerd[1616]: 2026-01-15 00:32:17.477 [INFO][4254] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.6.4/32] ContainerID="2a40b460fec1a559eefa0ce9fc09e205ce706b2050fe9c6bed047d22ae47c914" Namespace="calico-system" Pod="calico-kube-controllers-7d4f97847b-lrvs5" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-calico--kube--controllers--7d4f97847b--lrvs5-eth0" Jan 15 00:32:17.553785 containerd[1616]: 2026-01-15 00:32:17.477 [INFO][4254] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60c829703c8 ContainerID="2a40b460fec1a559eefa0ce9fc09e205ce706b2050fe9c6bed047d22ae47c914" Namespace="calico-system" Pod="calico-kube-controllers-7d4f97847b-lrvs5" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-calico--kube--controllers--7d4f97847b--lrvs5-eth0" Jan 15 
00:32:17.553785 containerd[1616]: 2026-01-15 00:32:17.507 [INFO][4254] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a40b460fec1a559eefa0ce9fc09e205ce706b2050fe9c6bed047d22ae47c914" Namespace="calico-system" Pod="calico-kube-controllers-7d4f97847b-lrvs5" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-calico--kube--controllers--7d4f97847b--lrvs5-eth0" Jan 15 00:32:17.553861 containerd[1616]: 2026-01-15 00:32:17.514 [INFO][4254] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2a40b460fec1a559eefa0ce9fc09e205ce706b2050fe9c6bed047d22ae47c914" Namespace="calico-system" Pod="calico-kube-controllers-7d4f97847b-lrvs5" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-calico--kube--controllers--7d4f97847b--lrvs5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--4ecc98c3fd-k8s-calico--kube--controllers--7d4f97847b--lrvs5-eth0", GenerateName:"calico-kube-controllers-7d4f97847b-", Namespace:"calico-system", SelfLink:"", UID:"5adbdfdd-96a2-41eb-8663-7460bd3865b9", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 31, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d4f97847b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-4ecc98c3fd", ContainerID:"2a40b460fec1a559eefa0ce9fc09e205ce706b2050fe9c6bed047d22ae47c914", Pod:"calico-kube-controllers-7d4f97847b-lrvs5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.6.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali60c829703c8", MAC:"22:ec:7b:dd:51:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:32:17.553921 containerd[1616]: 2026-01-15 00:32:17.532 [INFO][4254] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2a40b460fec1a559eefa0ce9fc09e205ce706b2050fe9c6bed047d22ae47c914" Namespace="calico-system" Pod="calico-kube-controllers-7d4f97847b-lrvs5" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-calico--kube--controllers--7d4f97847b--lrvs5-eth0" Jan 15 00:32:17.567968 systemd[1]: Started cri-containerd-5e2404c2a290edecd02ffd86b2718c16d8aa8a56fa550fe3311011453a6dc346.scope - libcontainer container 5e2404c2a290edecd02ffd86b2718c16d8aa8a56fa550fe3311011453a6dc346. 
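The MAC addresses Calico stamps onto these endpoints (82:ca:a5:73:a1:28 earlier, 22:ec:7b:dd:51:b5 here) are locally administered unicast addresses; the two low-order bits of the first octet encode that. A small illustrative check in Python:

def describe_mac(mac: str) -> str:
    """Classify a MAC from the I/G and U/L bits of its first octet."""
    first = int(mac.split(":")[0], 16)
    igl = "multicast" if first & 0x01 else "unicast"                     # bit 0: individual/group
    ul = "locally administered" if first & 0x02 else "globally unique"   # bit 1: universal/local
    return f"{mac}: {igl}, {ul}"

for mac in ("82:ca:a5:73:a1:28", "22:ec:7b:dd:51:b5"):
    print(describe_mac(mac))   # both print: unicast, locally administered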
Jan 15 00:32:17.623000 audit[4360]: NETFILTER_CFG table=filter:129 family=2 entries=44 op=nft_register_chain pid=4360 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 00:32:17.631043 kernel: audit: type=1325 audit(1768437137.623:671): table=filter:129 family=2 entries=44 op=nft_register_chain pid=4360 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 00:32:17.623000 audit[4360]: SYSCALL arch=c000003e syscall=46 success=yes exit=21952 a0=3 a1=7ffd888ed150 a2=0 a3=7ffd888ed13c items=0 ppid=3907 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:17.672066 kernel: audit: type=1300 audit(1768437137.623:671): arch=c000003e syscall=46 success=yes exit=21952 a0=3 a1=7ffd888ed150 a2=0 a3=7ffd888ed13c items=0 ppid=3907 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:17.623000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 00:32:17.679221 kernel: audit: type=1327 audit(1768437137.623:671): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 00:32:17.679348 kernel: audit: type=1334 audit(1768437137.666:672): prog-id=216 op=LOAD Jan 15 00:32:17.679370 kernel: audit: type=1334 audit(1768437137.666:673): prog-id=217 op=LOAD Jan 15 00:32:17.666000 audit: BPF prog-id=216 op=LOAD Jan 15 00:32:17.666000 audit: BPF prog-id=217 op=LOAD Jan 15 00:32:17.682304 kernel: audit: type=1300 audit(1768437137.666:673): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4324 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:17.666000 audit[4338]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4324 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:17.666000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565323430346332613239306564656364303266666438366232373138 Jan 15 00:32:17.691815 kernel: audit: type=1327 audit(1768437137.666:673): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565323430346332613239306564656364303266666438366232373138 Jan 15 00:32:17.666000 audit: BPF prog-id=217 op=UNLOAD Jan 15 00:32:17.666000 audit[4338]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4324 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 
00:32:17.666000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565323430346332613239306564656364303266666438366232373138 Jan 15 00:32:17.666000 audit: BPF prog-id=218 op=LOAD Jan 15 00:32:17.666000 audit[4338]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4324 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:17.666000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565323430346332613239306564656364303266666438366232373138 Jan 15 00:32:17.666000 audit: BPF prog-id=219 op=LOAD Jan 15 00:32:17.666000 audit[4338]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4324 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:17.666000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565323430346332613239306564656364303266666438366232373138 Jan 15 00:32:17.666000 audit: BPF prog-id=219 op=UNLOAD Jan 15 00:32:17.666000 audit[4338]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4324 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:17.666000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565323430346332613239306564656364303266666438366232373138 Jan 15 00:32:17.666000 audit: BPF prog-id=218 op=UNLOAD Jan 15 00:32:17.666000 audit[4338]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4324 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:17.666000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565323430346332613239306564656364303266666438366232373138 Jan 15 00:32:17.666000 audit: BPF prog-id=220 op=LOAD Jan 15 00:32:17.666000 audit[4338]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4324 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:17.666000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565323430346332613239306564656364303266666438366232373138 Jan 15 00:32:17.701039 systemd-networkd[1518]: calicd11e4bda5c: Link UP Jan 15 00:32:17.704373 systemd-networkd[1518]: calicd11e4bda5c: Gained carrier Jan 15 00:32:17.738321 containerd[1616]: 2026-01-15 00:32:17.222 [INFO][4245] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--n--4ecc98c3fd-k8s-coredns--668d6bf9bc--cxd6l-eth0 coredns-668d6bf9bc- kube-system bd9fdd13-b944-49f0-8efe-4c6c4031a849 889 0 2026-01-15 00:31:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515.1.0-n-4ecc98c3fd coredns-668d6bf9bc-cxd6l eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicd11e4bda5c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="61cfe793d5ef8a9a4be27ef8c1709496c7c6b876a93214c963f86f3f522c7ace" Namespace="kube-system" Pod="coredns-668d6bf9bc-cxd6l" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-coredns--668d6bf9bc--cxd6l-" Jan 15 00:32:17.738321 containerd[1616]: 2026-01-15 00:32:17.223 [INFO][4245] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="61cfe793d5ef8a9a4be27ef8c1709496c7c6b876a93214c963f86f3f522c7ace" Namespace="kube-system" Pod="coredns-668d6bf9bc-cxd6l" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-coredns--668d6bf9bc--cxd6l-eth0" Jan 15 00:32:17.738321 containerd[1616]: 2026-01-15 00:32:17.302 [INFO][4292] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="61cfe793d5ef8a9a4be27ef8c1709496c7c6b876a93214c963f86f3f522c7ace" HandleID="k8s-pod-network.61cfe793d5ef8a9a4be27ef8c1709496c7c6b876a93214c963f86f3f522c7ace" Workload="ci--4515.1.0--n--4ecc98c3fd-k8s-coredns--668d6bf9bc--cxd6l-eth0" Jan 15 00:32:17.738624 containerd[1616]: 2026-01-15 00:32:17.303 [INFO][4292] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="61cfe793d5ef8a9a4be27ef8c1709496c7c6b876a93214c963f86f3f522c7ace" HandleID="k8s-pod-network.61cfe793d5ef8a9a4be27ef8c1709496c7c6b876a93214c963f86f3f522c7ace" Workload="ci--4515.1.0--n--4ecc98c3fd-k8s-coredns--668d6bf9bc--cxd6l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332bf0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515.1.0-n-4ecc98c3fd", "pod":"coredns-668d6bf9bc-cxd6l", "timestamp":"2026-01-15 00:32:17.302796034 +0000 UTC"}, Hostname:"ci-4515.1.0-n-4ecc98c3fd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 00:32:17.738624 containerd[1616]: 2026-01-15 00:32:17.303 [INFO][4292] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 00:32:17.738624 containerd[1616]: 2026-01-15 00:32:17.461 [INFO][4292] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
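The audit PROCTITLE records scattered through this stretch carry the triggering command line hex-encoded, with NUL bytes separating the arguments. Decoding the iptables-nft-restore proctitle that appears above (a one-off illustration, not part of the captured log):

# proctitle= value copied from the NETFILTER_CFG audit records above.
hexstr = (
    "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368"
    "002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030"
)

# Audit stores the command line with NUL separators between argv entries.
argv = bytes.fromhex(hexstr).split(b"\x00")
print([a.decode() for a in argv])
# ['iptables-nft-restore', '--noflush', '--verbose', '--wait', '10', '--wait-interval', '50000']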
Jan 15 00:32:17.738624 containerd[1616]: 2026-01-15 00:32:17.469 [INFO][4292] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-n-4ecc98c3fd' Jan 15 00:32:17.738624 containerd[1616]: 2026-01-15 00:32:17.498 [INFO][4292] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.61cfe793d5ef8a9a4be27ef8c1709496c7c6b876a93214c963f86f3f522c7ace" host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:17.738624 containerd[1616]: 2026-01-15 00:32:17.518 [INFO][4292] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:17.738624 containerd[1616]: 2026-01-15 00:32:17.540 [INFO][4292] ipam/ipam.go 511: Trying affinity for 192.168.6.0/26 host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:17.738624 containerd[1616]: 2026-01-15 00:32:17.546 [INFO][4292] ipam/ipam.go 158: Attempting to load block cidr=192.168.6.0/26 host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:17.738624 containerd[1616]: 2026-01-15 00:32:17.552 [INFO][4292] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.6.0/26 host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:17.739245 containerd[1616]: 2026-01-15 00:32:17.552 [INFO][4292] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.6.0/26 handle="k8s-pod-network.61cfe793d5ef8a9a4be27ef8c1709496c7c6b876a93214c963f86f3f522c7ace" host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:17.739245 containerd[1616]: 2026-01-15 00:32:17.557 [INFO][4292] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.61cfe793d5ef8a9a4be27ef8c1709496c7c6b876a93214c963f86f3f522c7ace Jan 15 00:32:17.739245 containerd[1616]: 2026-01-15 00:32:17.575 [INFO][4292] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.6.0/26 handle="k8s-pod-network.61cfe793d5ef8a9a4be27ef8c1709496c7c6b876a93214c963f86f3f522c7ace" host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:17.739245 containerd[1616]: 2026-01-15 00:32:17.621 [INFO][4292] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.6.5/26] block=192.168.6.0/26 handle="k8s-pod-network.61cfe793d5ef8a9a4be27ef8c1709496c7c6b876a93214c963f86f3f522c7ace" host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:17.739245 containerd[1616]: 2026-01-15 00:32:17.622 [INFO][4292] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.6.5/26] handle="k8s-pod-network.61cfe793d5ef8a9a4be27ef8c1709496c7c6b876a93214c963f86f3f522c7ace" host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:17.739245 containerd[1616]: 2026-01-15 00:32:17.623 [INFO][4292] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 15 00:32:17.739245 containerd[1616]: 2026-01-15 00:32:17.623 [INFO][4292] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.6.5/26] IPv6=[] ContainerID="61cfe793d5ef8a9a4be27ef8c1709496c7c6b876a93214c963f86f3f522c7ace" HandleID="k8s-pod-network.61cfe793d5ef8a9a4be27ef8c1709496c7c6b876a93214c963f86f3f522c7ace" Workload="ci--4515.1.0--n--4ecc98c3fd-k8s-coredns--668d6bf9bc--cxd6l-eth0" Jan 15 00:32:17.739440 containerd[1616]: 2026-01-15 00:32:17.673 [INFO][4245] cni-plugin/k8s.go 418: Populated endpoint ContainerID="61cfe793d5ef8a9a4be27ef8c1709496c7c6b876a93214c963f86f3f522c7ace" Namespace="kube-system" Pod="coredns-668d6bf9bc-cxd6l" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-coredns--668d6bf9bc--cxd6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--4ecc98c3fd-k8s-coredns--668d6bf9bc--cxd6l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bd9fdd13-b944-49f0-8efe-4c6c4031a849", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 31, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-4ecc98c3fd", ContainerID:"", Pod:"coredns-668d6bf9bc-cxd6l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.6.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicd11e4bda5c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:32:17.739440 containerd[1616]: 2026-01-15 00:32:17.673 [INFO][4245] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.6.5/32] ContainerID="61cfe793d5ef8a9a4be27ef8c1709496c7c6b876a93214c963f86f3f522c7ace" Namespace="kube-system" Pod="coredns-668d6bf9bc-cxd6l" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-coredns--668d6bf9bc--cxd6l-eth0" Jan 15 00:32:17.739440 containerd[1616]: 2026-01-15 00:32:17.673 [INFO][4245] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicd11e4bda5c ContainerID="61cfe793d5ef8a9a4be27ef8c1709496c7c6b876a93214c963f86f3f522c7ace" Namespace="kube-system" Pod="coredns-668d6bf9bc-cxd6l" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-coredns--668d6bf9bc--cxd6l-eth0" Jan 15 00:32:17.739440 containerd[1616]: 2026-01-15 00:32:17.699 [INFO][4245] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="61cfe793d5ef8a9a4be27ef8c1709496c7c6b876a93214c963f86f3f522c7ace" Namespace="kube-system" Pod="coredns-668d6bf9bc-cxd6l" 
WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-coredns--668d6bf9bc--cxd6l-eth0" Jan 15 00:32:17.739440 containerd[1616]: 2026-01-15 00:32:17.700 [INFO][4245] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="61cfe793d5ef8a9a4be27ef8c1709496c7c6b876a93214c963f86f3f522c7ace" Namespace="kube-system" Pod="coredns-668d6bf9bc-cxd6l" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-coredns--668d6bf9bc--cxd6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--4ecc98c3fd-k8s-coredns--668d6bf9bc--cxd6l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bd9fdd13-b944-49f0-8efe-4c6c4031a849", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 31, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-4ecc98c3fd", ContainerID:"61cfe793d5ef8a9a4be27ef8c1709496c7c6b876a93214c963f86f3f522c7ace", Pod:"coredns-668d6bf9bc-cxd6l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.6.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicd11e4bda5c", MAC:"12:b7:23:fe:8b:e8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:32:17.739440 containerd[1616]: 2026-01-15 00:32:17.724 [INFO][4245] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="61cfe793d5ef8a9a4be27ef8c1709496c7c6b876a93214c963f86f3f522c7ace" Namespace="kube-system" Pod="coredns-668d6bf9bc-cxd6l" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-coredns--668d6bf9bc--cxd6l-eth0" Jan 15 00:32:17.796369 containerd[1616]: time="2026-01-15T00:32:17.796303829Z" level=info msg="connecting to shim 2a40b460fec1a559eefa0ce9fc09e205ce706b2050fe9c6bed047d22ae47c914" address="unix:///run/containerd/s/5f73917af652958fdc8cdb6417c308635bc0a7932d7770f7d48e1e92616212d7" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:32:17.801113 containerd[1616]: time="2026-01-15T00:32:17.800608359Z" level=info msg="connecting to shim 61cfe793d5ef8a9a4be27ef8c1709496c7c6b876a93214c963f86f3f522c7ace" address="unix:///run/containerd/s/0dc40831168bbc7442f0cdb82317f76e3ac2607712674c16da364af6ce594fe4" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:32:17.862723 systemd-networkd[1518]: cali3151784e5bc: Link UP Jan 15 00:32:17.865941 systemd-networkd[1518]: cali3151784e5bc: Gained carrier Jan 15 00:32:17.908007 containerd[1616]: 2026-01-15 00:32:17.231 
[INFO][4242] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--n--4ecc98c3fd-k8s-coredns--668d6bf9bc--n8h7v-eth0 coredns-668d6bf9bc- kube-system 5e2ebb1c-cdf8-4c57-934e-7ae859fc7427 883 0 2026-01-15 00:31:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515.1.0-n-4ecc98c3fd coredns-668d6bf9bc-n8h7v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3151784e5bc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5d108976a296615d6bb62f4a396e59543deb0725c92c919b4d7f47187b655e1b" Namespace="kube-system" Pod="coredns-668d6bf9bc-n8h7v" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-coredns--668d6bf9bc--n8h7v-" Jan 15 00:32:17.908007 containerd[1616]: 2026-01-15 00:32:17.232 [INFO][4242] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5d108976a296615d6bb62f4a396e59543deb0725c92c919b4d7f47187b655e1b" Namespace="kube-system" Pod="coredns-668d6bf9bc-n8h7v" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-coredns--668d6bf9bc--n8h7v-eth0" Jan 15 00:32:17.908007 containerd[1616]: 2026-01-15 00:32:17.324 [INFO][4298] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d108976a296615d6bb62f4a396e59543deb0725c92c919b4d7f47187b655e1b" HandleID="k8s-pod-network.5d108976a296615d6bb62f4a396e59543deb0725c92c919b4d7f47187b655e1b" Workload="ci--4515.1.0--n--4ecc98c3fd-k8s-coredns--668d6bf9bc--n8h7v-eth0" Jan 15 00:32:17.908007 containerd[1616]: 2026-01-15 00:32:17.325 [INFO][4298] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5d108976a296615d6bb62f4a396e59543deb0725c92c919b4d7f47187b655e1b" HandleID="k8s-pod-network.5d108976a296615d6bb62f4a396e59543deb0725c92c919b4d7f47187b655e1b" Workload="ci--4515.1.0--n--4ecc98c3fd-k8s-coredns--668d6bf9bc--n8h7v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f950), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515.1.0-n-4ecc98c3fd", "pod":"coredns-668d6bf9bc-n8h7v", "timestamp":"2026-01-15 00:32:17.324510975 +0000 UTC"}, Hostname:"ci-4515.1.0-n-4ecc98c3fd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 00:32:17.908007 containerd[1616]: 2026-01-15 00:32:17.325 [INFO][4298] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 00:32:17.908007 containerd[1616]: 2026-01-15 00:32:17.625 [INFO][4298] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 15 00:32:17.908007 containerd[1616]: 2026-01-15 00:32:17.627 [INFO][4298] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-n-4ecc98c3fd' Jan 15 00:32:17.908007 containerd[1616]: 2026-01-15 00:32:17.686 [INFO][4298] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5d108976a296615d6bb62f4a396e59543deb0725c92c919b4d7f47187b655e1b" host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:17.908007 containerd[1616]: 2026-01-15 00:32:17.712 [INFO][4298] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:17.908007 containerd[1616]: 2026-01-15 00:32:17.728 [INFO][4298] ipam/ipam.go 511: Trying affinity for 192.168.6.0/26 host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:17.908007 containerd[1616]: 2026-01-15 00:32:17.733 [INFO][4298] ipam/ipam.go 158: Attempting to load block cidr=192.168.6.0/26 host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:17.908007 containerd[1616]: 2026-01-15 00:32:17.747 [INFO][4298] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.6.0/26 host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:17.908007 containerd[1616]: 2026-01-15 00:32:17.750 [INFO][4298] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.6.0/26 handle="k8s-pod-network.5d108976a296615d6bb62f4a396e59543deb0725c92c919b4d7f47187b655e1b" host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:17.908007 containerd[1616]: 2026-01-15 00:32:17.759 [INFO][4298] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5d108976a296615d6bb62f4a396e59543deb0725c92c919b4d7f47187b655e1b Jan 15 00:32:17.908007 containerd[1616]: 2026-01-15 00:32:17.783 [INFO][4298] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.6.0/26 handle="k8s-pod-network.5d108976a296615d6bb62f4a396e59543deb0725c92c919b4d7f47187b655e1b" host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:17.908007 containerd[1616]: 2026-01-15 00:32:17.798 [INFO][4298] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.6.6/26] block=192.168.6.0/26 handle="k8s-pod-network.5d108976a296615d6bb62f4a396e59543deb0725c92c919b4d7f47187b655e1b" host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:17.908007 containerd[1616]: 2026-01-15 00:32:17.798 [INFO][4298] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.6.6/26] handle="k8s-pod-network.5d108976a296615d6bb62f4a396e59543deb0725c92c919b4d7f47187b655e1b" host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:17.908007 containerd[1616]: 2026-01-15 00:32:17.798 [INFO][4298] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 15 00:32:17.908007 containerd[1616]: 2026-01-15 00:32:17.798 [INFO][4298] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.6.6/26] IPv6=[] ContainerID="5d108976a296615d6bb62f4a396e59543deb0725c92c919b4d7f47187b655e1b" HandleID="k8s-pod-network.5d108976a296615d6bb62f4a396e59543deb0725c92c919b4d7f47187b655e1b" Workload="ci--4515.1.0--n--4ecc98c3fd-k8s-coredns--668d6bf9bc--n8h7v-eth0" Jan 15 00:32:17.912406 containerd[1616]: 2026-01-15 00:32:17.816 [INFO][4242] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5d108976a296615d6bb62f4a396e59543deb0725c92c919b4d7f47187b655e1b" Namespace="kube-system" Pod="coredns-668d6bf9bc-n8h7v" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-coredns--668d6bf9bc--n8h7v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--4ecc98c3fd-k8s-coredns--668d6bf9bc--n8h7v-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5e2ebb1c-cdf8-4c57-934e-7ae859fc7427", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 31, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-4ecc98c3fd", ContainerID:"", Pod:"coredns-668d6bf9bc-n8h7v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.6.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3151784e5bc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:32:17.912406 containerd[1616]: 2026-01-15 00:32:17.824 [INFO][4242] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.6.6/32] ContainerID="5d108976a296615d6bb62f4a396e59543deb0725c92c919b4d7f47187b655e1b" Namespace="kube-system" Pod="coredns-668d6bf9bc-n8h7v" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-coredns--668d6bf9bc--n8h7v-eth0" Jan 15 00:32:17.912406 containerd[1616]: 2026-01-15 00:32:17.824 [INFO][4242] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3151784e5bc ContainerID="5d108976a296615d6bb62f4a396e59543deb0725c92c919b4d7f47187b655e1b" Namespace="kube-system" Pod="coredns-668d6bf9bc-n8h7v" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-coredns--668d6bf9bc--n8h7v-eth0" Jan 15 00:32:17.912406 containerd[1616]: 2026-01-15 00:32:17.863 [INFO][4242] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5d108976a296615d6bb62f4a396e59543deb0725c92c919b4d7f47187b655e1b" Namespace="kube-system" Pod="coredns-668d6bf9bc-n8h7v" 
WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-coredns--668d6bf9bc--n8h7v-eth0" Jan 15 00:32:17.912406 containerd[1616]: 2026-01-15 00:32:17.870 [INFO][4242] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5d108976a296615d6bb62f4a396e59543deb0725c92c919b4d7f47187b655e1b" Namespace="kube-system" Pod="coredns-668d6bf9bc-n8h7v" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-coredns--668d6bf9bc--n8h7v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--4ecc98c3fd-k8s-coredns--668d6bf9bc--n8h7v-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5e2ebb1c-cdf8-4c57-934e-7ae859fc7427", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 31, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-4ecc98c3fd", ContainerID:"5d108976a296615d6bb62f4a396e59543deb0725c92c919b4d7f47187b655e1b", Pod:"coredns-668d6bf9bc-n8h7v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.6.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3151784e5bc", MAC:"fe:5a:57:c5:ac:fd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:32:17.912406 containerd[1616]: 2026-01-15 00:32:17.891 [INFO][4242] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5d108976a296615d6bb62f4a396e59543deb0725c92c919b4d7f47187b655e1b" Namespace="kube-system" Pod="coredns-668d6bf9bc-n8h7v" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-coredns--668d6bf9bc--n8h7v-eth0" Jan 15 00:32:17.919363 systemd[1]: Started cri-containerd-61cfe793d5ef8a9a4be27ef8c1709496c7c6b876a93214c963f86f3f522c7ace.scope - libcontainer container 61cfe793d5ef8a9a4be27ef8c1709496c7c6b876a93214c963f86f3f522c7ace. 
Jan 15 00:32:17.921000 audit[4434]: NETFILTER_CFG table=filter:130 family=2 entries=60 op=nft_register_chain pid=4434 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 00:32:17.921000 audit[4434]: SYSCALL arch=c000003e syscall=46 success=yes exit=28968 a0=3 a1=7ffd60503e20 a2=0 a3=7ffd60503e0c items=0 ppid=3907 pid=4434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:17.921000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 00:32:17.938627 containerd[1616]: time="2026-01-15T00:32:17.938418956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-586c796f68-gf7fx,Uid:b432d05d-ed71-4758-b9af-7738bf34afb7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5e2404c2a290edecd02ffd86b2718c16d8aa8a56fa550fe3311011453a6dc346\"" Jan 15 00:32:17.950778 containerd[1616]: time="2026-01-15T00:32:17.950610100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 00:32:17.969572 systemd[1]: Started cri-containerd-2a40b460fec1a559eefa0ce9fc09e205ce706b2050fe9c6bed047d22ae47c914.scope - libcontainer container 2a40b460fec1a559eefa0ce9fc09e205ce706b2050fe9c6bed047d22ae47c914. Jan 15 00:32:17.983000 audit: BPF prog-id=221 op=LOAD Jan 15 00:32:17.985000 audit: BPF prog-id=222 op=LOAD Jan 15 00:32:17.985000 audit[4410]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4393 pid=4410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:17.985000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631636665373933643565663861396134626532376566386331373039 Jan 15 00:32:17.987000 audit: BPF prog-id=222 op=UNLOAD Jan 15 00:32:17.987000 audit[4410]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4393 pid=4410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:17.987000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631636665373933643565663861396134626532376566386331373039 Jan 15 00:32:17.989323 containerd[1616]: time="2026-01-15T00:32:17.988624753Z" level=info msg="connecting to shim 5d108976a296615d6bb62f4a396e59543deb0725c92c919b4d7f47187b655e1b" address="unix:///run/containerd/s/d9b9ed39e495d147c16a463c5e2f1c74a106e4ef11ddec48eaaaa29ebff7ba19" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:32:17.988000 audit: BPF prog-id=223 op=LOAD Jan 15 00:32:17.990066 containerd[1616]: time="2026-01-15T00:32:17.989284751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-fmmn9,Uid:a6d2aaa6-9d35-4f8a-99b8-b75c10539cd4,Namespace:calico-system,Attempt:0,}" Jan 15 00:32:17.988000 audit[4410]: SYSCALL arch=c000003e syscall=321 
success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4393 pid=4410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:17.988000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631636665373933643565663861396134626532376566386331373039 Jan 15 00:32:17.989000 audit: BPF prog-id=224 op=LOAD Jan 15 00:32:17.989000 audit[4410]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4393 pid=4410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:17.989000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631636665373933643565663861396134626532376566386331373039 Jan 15 00:32:17.989000 audit: BPF prog-id=224 op=UNLOAD Jan 15 00:32:17.989000 audit[4410]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4393 pid=4410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:17.989000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631636665373933643565663861396134626532376566386331373039 Jan 15 00:32:17.990000 audit: BPF prog-id=223 op=UNLOAD Jan 15 00:32:17.990000 audit[4410]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4393 pid=4410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:17.990000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631636665373933643565663861396134626532376566386331373039 Jan 15 00:32:17.991000 audit: BPF prog-id=225 op=LOAD Jan 15 00:32:17.991000 audit[4410]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4393 pid=4410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:17.991000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631636665373933643565663861396134626532376566386331373039 Jan 15 00:32:18.101000 audit[4503]: NETFILTER_CFG table=filter:131 family=2 entries=44 op=nft_register_chain pid=4503 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 00:32:18.101000 audit[4503]: SYSCALL arch=c000003e syscall=46 success=yes exit=21516 a0=3 
a1=7ffc2a617a60 a2=0 a3=7ffc2a617a4c items=0 ppid=3907 pid=4503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:18.101000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 00:32:18.113287 systemd-networkd[1518]: caliccbc21f5f7c: Gained IPv6LL Jan 15 00:32:18.137970 systemd[1]: Started cri-containerd-5d108976a296615d6bb62f4a396e59543deb0725c92c919b4d7f47187b655e1b.scope - libcontainer container 5d108976a296615d6bb62f4a396e59543deb0725c92c919b4d7f47187b655e1b. Jan 15 00:32:18.141000 audit: BPF prog-id=226 op=LOAD Jan 15 00:32:18.152000 audit: BPF prog-id=227 op=LOAD Jan 15 00:32:18.152000 audit[4424]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000242238 a2=98 a3=0 items=0 ppid=4389 pid=4424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:18.152000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261343062343630666563316135353965656661306365396663303965 Jan 15 00:32:18.158000 audit: BPF prog-id=227 op=UNLOAD Jan 15 00:32:18.158000 audit[4424]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4389 pid=4424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:18.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261343062343630666563316135353965656661306365396663303965 Jan 15 00:32:18.166000 audit: BPF prog-id=228 op=LOAD Jan 15 00:32:18.166000 audit[4424]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000242488 a2=98 a3=0 items=0 ppid=4389 pid=4424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:18.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261343062343630666563316135353965656661306365396663303965 Jan 15 00:32:18.166000 audit: BPF prog-id=229 op=LOAD Jan 15 00:32:18.166000 audit[4424]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000242218 a2=98 a3=0 items=0 ppid=4389 pid=4424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:18.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261343062343630666563316135353965656661306365396663303965 Jan 15 
00:32:18.166000 audit: BPF prog-id=229 op=UNLOAD Jan 15 00:32:18.166000 audit[4424]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4389 pid=4424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:18.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261343062343630666563316135353965656661306365396663303965 Jan 15 00:32:18.166000 audit: BPF prog-id=228 op=UNLOAD Jan 15 00:32:18.166000 audit[4424]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4389 pid=4424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:18.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261343062343630666563316135353965656661306365396663303965 Jan 15 00:32:18.166000 audit: BPF prog-id=230 op=LOAD Jan 15 00:32:18.166000 audit[4424]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002426e8 a2=98 a3=0 items=0 ppid=4389 pid=4424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:18.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261343062343630666563316135353965656661306365396663303965 Jan 15 00:32:18.190563 containerd[1616]: time="2026-01-15T00:32:18.190263621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cxd6l,Uid:bd9fdd13-b944-49f0-8efe-4c6c4031a849,Namespace:kube-system,Attempt:0,} returns sandbox id \"61cfe793d5ef8a9a4be27ef8c1709496c7c6b876a93214c963f86f3f522c7ace\"" Jan 15 00:32:18.192590 kubelet[2785]: E0115 00:32:18.192549 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:32:18.206992 containerd[1616]: time="2026-01-15T00:32:18.206637214Z" level=info msg="CreateContainer within sandbox \"61cfe793d5ef8a9a4be27ef8c1709496c7c6b876a93214c963f86f3f522c7ace\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 15 00:32:18.218000 audit: BPF prog-id=231 op=LOAD Jan 15 00:32:18.222000 audit: BPF prog-id=232 op=LOAD Jan 15 00:32:18.222000 audit[4496]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001e8238 a2=98 a3=0 items=0 ppid=4470 pid=4496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:18.222000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564313038393736613239363631356436626236326634613339366535 Jan 15 00:32:18.222000 audit: BPF prog-id=232 op=UNLOAD Jan 15 00:32:18.222000 audit[4496]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4470 pid=4496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:18.222000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564313038393736613239363631356436626236326634613339366535 Jan 15 00:32:18.222000 audit: BPF prog-id=233 op=LOAD Jan 15 00:32:18.222000 audit[4496]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001e8488 a2=98 a3=0 items=0 ppid=4470 pid=4496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:18.222000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564313038393736613239363631356436626236326634613339366535 Jan 15 00:32:18.222000 audit: BPF prog-id=234 op=LOAD Jan 15 00:32:18.222000 audit[4496]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001e8218 a2=98 a3=0 items=0 ppid=4470 pid=4496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:18.222000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564313038393736613239363631356436626236326634613339366535 Jan 15 00:32:18.223000 audit: BPF prog-id=234 op=UNLOAD Jan 15 00:32:18.223000 audit[4496]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4470 pid=4496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:18.223000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564313038393736613239363631356436626236326634613339366535 Jan 15 00:32:18.223000 audit: BPF prog-id=233 op=UNLOAD Jan 15 00:32:18.223000 audit[4496]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4470 pid=4496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:18.223000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564313038393736613239363631356436626236326634613339366535 Jan 15 00:32:18.223000 audit: BPF prog-id=235 op=LOAD Jan 15 00:32:18.223000 audit[4496]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001e86e8 a2=98 a3=0 items=0 ppid=4470 pid=4496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:18.223000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564313038393736613239363631356436626236326634613339366535 Jan 15 00:32:18.267995 containerd[1616]: time="2026-01-15T00:32:18.267905766Z" level=info msg="Container c0c9c169138ad80b6c86fb67c442d62a882747e892e4fc6760a1e6d359d88030: CDI devices from CRI Config.CDIDevices: []" Jan 15 00:32:18.268864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3673755574.mount: Deactivated successfully. Jan 15 00:32:18.286218 containerd[1616]: time="2026-01-15T00:32:18.285680397Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:32:18.292042 containerd[1616]: time="2026-01-15T00:32:18.291674143Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 00:32:18.293233 containerd[1616]: time="2026-01-15T00:32:18.292176980Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 00:32:18.293681 kubelet[2785]: E0115 00:32:18.293614 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:32:18.293931 kubelet[2785]: E0115 00:32:18.293854 2785 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:32:18.294907 kubelet[2785]: E0115 00:32:18.294719 2785 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tl2m7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-586c796f68-gf7fx_calico-apiserver(b432d05d-ed71-4758-b9af-7738bf34afb7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 00:32:18.297190 kubelet[2785]: E0115 00:32:18.297115 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-586c796f68-gf7fx" podUID="b432d05d-ed71-4758-b9af-7738bf34afb7" Jan 15 00:32:18.300368 containerd[1616]: time="2026-01-15T00:32:18.299905032Z" level=info msg="CreateContainer within sandbox \"61cfe793d5ef8a9a4be27ef8c1709496c7c6b876a93214c963f86f3f522c7ace\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c0c9c169138ad80b6c86fb67c442d62a882747e892e4fc6760a1e6d359d88030\"" Jan 15 00:32:18.302369 containerd[1616]: time="2026-01-15T00:32:18.302324459Z" level=info msg="StartContainer for \"c0c9c169138ad80b6c86fb67c442d62a882747e892e4fc6760a1e6d359d88030\"" Jan 15 00:32:18.316860 containerd[1616]: time="2026-01-15T00:32:18.316508000Z" level=info msg="connecting to shim c0c9c169138ad80b6c86fb67c442d62a882747e892e4fc6760a1e6d359d88030" address="unix:///run/containerd/s/0dc40831168bbc7442f0cdb82317f76e3ac2607712674c16da364af6ce594fe4" protocol=ttrpc version=3 Jan 15 00:32:18.385500 containerd[1616]: 
time="2026-01-15T00:32:18.385436990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d4f97847b-lrvs5,Uid:5adbdfdd-96a2-41eb-8663-7460bd3865b9,Namespace:calico-system,Attempt:0,} returns sandbox id \"2a40b460fec1a559eefa0ce9fc09e205ce706b2050fe9c6bed047d22ae47c914\"" Jan 15 00:32:18.390352 containerd[1616]: time="2026-01-15T00:32:18.390302416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 15 00:32:18.391336 systemd[1]: Started cri-containerd-c0c9c169138ad80b6c86fb67c442d62a882747e892e4fc6760a1e6d359d88030.scope - libcontainer container c0c9c169138ad80b6c86fb67c442d62a882747e892e4fc6760a1e6d359d88030. Jan 15 00:32:18.435277 containerd[1616]: time="2026-01-15T00:32:18.434708670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n8h7v,Uid:5e2ebb1c-cdf8-4c57-934e-7ae859fc7427,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d108976a296615d6bb62f4a396e59543deb0725c92c919b4d7f47187b655e1b\"" Jan 15 00:32:18.438661 kubelet[2785]: E0115 00:32:18.438506 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:32:18.449838 containerd[1616]: time="2026-01-15T00:32:18.449674918Z" level=info msg="CreateContainer within sandbox \"5d108976a296615d6bb62f4a396e59543deb0725c92c919b4d7f47187b655e1b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 15 00:32:18.469000 audit: BPF prog-id=236 op=LOAD Jan 15 00:32:18.471000 audit: BPF prog-id=237 op=LOAD Jan 15 00:32:18.471000 audit[4535]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4393 pid=4535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:18.471000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330633963313639313338616438306236633836666236376334343264 Jan 15 00:32:18.472000 audit: BPF prog-id=237 op=UNLOAD Jan 15 00:32:18.472000 audit[4535]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4393 pid=4535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:18.472000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330633963313639313338616438306236633836666236376334343264 Jan 15 00:32:18.473000 audit: BPF prog-id=238 op=LOAD Jan 15 00:32:18.473000 audit[4535]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4393 pid=4535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:18.473000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330633963313639313338616438306236633836666236376334343264 Jan 15 00:32:18.473000 audit: BPF prog-id=239 op=LOAD Jan 15 00:32:18.473000 audit[4535]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4393 pid=4535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:18.473000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330633963313639313338616438306236633836666236376334343264 Jan 15 00:32:18.474000 audit: BPF prog-id=239 op=UNLOAD Jan 15 00:32:18.474000 audit[4535]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4393 pid=4535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:18.474000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330633963313639313338616438306236633836666236376334343264 Jan 15 00:32:18.475000 audit: BPF prog-id=238 op=UNLOAD Jan 15 00:32:18.477113 containerd[1616]: time="2026-01-15T00:32:18.475616956Z" level=info msg="Container 215f7b93f7e6805232d39a618cc5154f805d9a945124ad4f173c7f49a79e9eb5: CDI devices from CRI Config.CDIDevices: []" Jan 15 00:32:18.475000 audit[4535]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4393 pid=4535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:18.475000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330633963313639313338616438306236633836666236376334343264 Jan 15 00:32:18.476000 audit: BPF prog-id=240 op=LOAD Jan 15 00:32:18.476000 audit[4535]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=4393 pid=4535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:18.476000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330633963313639313338616438306236633836666236376334343264 Jan 15 00:32:18.504802 kubelet[2785]: E0115 00:32:18.504717 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-586c796f68-gf7fx" podUID="b432d05d-ed71-4758-b9af-7738bf34afb7" Jan 15 00:32:18.512056 containerd[1616]: time="2026-01-15T00:32:18.511297597Z" level=info msg="CreateContainer within sandbox \"5d108976a296615d6bb62f4a396e59543deb0725c92c919b4d7f47187b655e1b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"215f7b93f7e6805232d39a618cc5154f805d9a945124ad4f173c7f49a79e9eb5\"" Jan 15 00:32:18.514262 containerd[1616]: time="2026-01-15T00:32:18.514146379Z" level=info msg="StartContainer for \"215f7b93f7e6805232d39a618cc5154f805d9a945124ad4f173c7f49a79e9eb5\"" Jan 15 00:32:18.524859 containerd[1616]: time="2026-01-15T00:32:18.524462345Z" level=info msg="connecting to shim 215f7b93f7e6805232d39a618cc5154f805d9a945124ad4f173c7f49a79e9eb5" address="unix:///run/containerd/s/d9b9ed39e495d147c16a463c5e2f1c74a106e4ef11ddec48eaaaa29ebff7ba19" protocol=ttrpc version=3 Jan 15 00:32:18.598173 containerd[1616]: time="2026-01-15T00:32:18.598112271Z" level=info msg="StartContainer for \"c0c9c169138ad80b6c86fb67c442d62a882747e892e4fc6760a1e6d359d88030\" returns successfully" Jan 15 00:32:18.618544 systemd[1]: Started cri-containerd-215f7b93f7e6805232d39a618cc5154f805d9a945124ad4f173c7f49a79e9eb5.scope - libcontainer container 215f7b93f7e6805232d39a618cc5154f805d9a945124ad4f173c7f49a79e9eb5. Jan 15 00:32:18.672000 audit: BPF prog-id=241 op=LOAD Jan 15 00:32:18.673000 audit: BPF prog-id=242 op=LOAD Jan 15 00:32:18.673000 audit[4570]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4470 pid=4570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:18.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231356637623933663765363830353233326433396136313863633531 Jan 15 00:32:18.673000 audit: BPF prog-id=242 op=UNLOAD Jan 15 00:32:18.673000 audit[4570]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4470 pid=4570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:18.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231356637623933663765363830353233326433396136313863633531 Jan 15 00:32:18.674000 audit: BPF prog-id=243 op=LOAD Jan 15 00:32:18.674000 audit[4570]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4470 pid=4570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:18.674000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231356637623933663765363830353233326433396136313863633531 Jan 15 00:32:18.674000 audit: BPF prog-id=244 op=LOAD Jan 15 00:32:18.674000 audit[4570]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4470 pid=4570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:18.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231356637623933663765363830353233326433396136313863633531 Jan 15 00:32:18.674000 audit: BPF prog-id=244 op=UNLOAD Jan 15 00:32:18.674000 audit[4570]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4470 pid=4570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:18.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231356637623933663765363830353233326433396136313863633531 Jan 15 00:32:18.674000 audit: BPF prog-id=243 op=UNLOAD Jan 15 00:32:18.674000 audit[4570]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4470 pid=4570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:18.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231356637623933663765363830353233326433396136313863633531 Jan 15 00:32:18.674000 audit: BPF prog-id=245 op=LOAD Jan 15 00:32:18.674000 audit[4570]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4470 pid=4570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:18.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231356637623933663765363830353233326433396136313863633531 Jan 15 00:32:18.683000 audit[4600]: NETFILTER_CFG table=filter:132 family=2 entries=20 op=nft_register_rule pid=4600 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:32:18.683000 audit[4600]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffd3cbe1c0 a2=0 a3=7fffd3cbe1ac items=0 ppid=2937 pid=4600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:18.683000 audit: 
PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:32:18.698837 systemd-networkd[1518]: cali23ca391d7f5: Link UP Jan 15 00:32:18.701000 audit[4600]: NETFILTER_CFG table=nat:133 family=2 entries=14 op=nft_register_rule pid=4600 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:32:18.702765 systemd-networkd[1518]: cali23ca391d7f5: Gained carrier Jan 15 00:32:18.701000 audit[4600]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fffd3cbe1c0 a2=0 a3=0 items=0 ppid=2937 pid=4600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:18.701000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:32:18.739056 containerd[1616]: time="2026-01-15T00:32:18.738938800Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:32:18.743994 containerd[1616]: time="2026-01-15T00:32:18.743671294Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 15 00:32:18.746552 containerd[1616]: time="2026-01-15T00:32:18.746168774Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 15 00:32:18.748272 kubelet[2785]: E0115 00:32:18.748161 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 00:32:18.748272 kubelet[2785]: E0115 00:32:18.748232 2785 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 00:32:18.749309 kubelet[2785]: E0115 00:32:18.749225 2785 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ts2lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7d4f97847b-lrvs5_calico-system(5adbdfdd-96a2-41eb-8663-7460bd3865b9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 15 00:32:18.751185 kubelet[2785]: E0115 00:32:18.751130 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d4f97847b-lrvs5" podUID="5adbdfdd-96a2-41eb-8663-7460bd3865b9" Jan 15 00:32:18.766866 containerd[1616]: 2026-01-15 00:32:18.243 [INFO][4482] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4515.1.0--n--4ecc98c3fd-k8s-goldmane--666569f655--fmmn9-eth0 goldmane-666569f655- calico-system a6d2aaa6-9d35-4f8a-99b8-b75c10539cd4 893 0 2026-01-15 00:31:50 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4515.1.0-n-4ecc98c3fd goldmane-666569f655-fmmn9 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali23ca391d7f5 [] [] }} ContainerID="8fb341b61cea6a64c4232aeb94524fce48e5e2149e029df370b60dba5db94b9d" Namespace="calico-system" Pod="goldmane-666569f655-fmmn9" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-goldmane--666569f655--fmmn9-" Jan 15 00:32:18.766866 containerd[1616]: 2026-01-15 00:32:18.243 [INFO][4482] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8fb341b61cea6a64c4232aeb94524fce48e5e2149e029df370b60dba5db94b9d" Namespace="calico-system" Pod="goldmane-666569f655-fmmn9" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-goldmane--666569f655--fmmn9-eth0" Jan 15 00:32:18.766866 containerd[1616]: 2026-01-15 00:32:18.542 [INFO][4529] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8fb341b61cea6a64c4232aeb94524fce48e5e2149e029df370b60dba5db94b9d" HandleID="k8s-pod-network.8fb341b61cea6a64c4232aeb94524fce48e5e2149e029df370b60dba5db94b9d" Workload="ci--4515.1.0--n--4ecc98c3fd-k8s-goldmane--666569f655--fmmn9-eth0" Jan 15 00:32:18.766866 containerd[1616]: 2026-01-15 00:32:18.545 [INFO][4529] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8fb341b61cea6a64c4232aeb94524fce48e5e2149e029df370b60dba5db94b9d" HandleID="k8s-pod-network.8fb341b61cea6a64c4232aeb94524fce48e5e2149e029df370b60dba5db94b9d" Workload="ci--4515.1.0--n--4ecc98c3fd-k8s-goldmane--666569f655--fmmn9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039c8f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-n-4ecc98c3fd", "pod":"goldmane-666569f655-fmmn9", "timestamp":"2026-01-15 00:32:18.542975986 +0000 UTC"}, Hostname:"ci-4515.1.0-n-4ecc98c3fd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 00:32:18.766866 containerd[1616]: 2026-01-15 00:32:18.545 [INFO][4529] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 00:32:18.766866 containerd[1616]: 2026-01-15 00:32:18.545 [INFO][4529] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 15 00:32:18.766866 containerd[1616]: 2026-01-15 00:32:18.545 [INFO][4529] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-n-4ecc98c3fd' Jan 15 00:32:18.766866 containerd[1616]: 2026-01-15 00:32:18.567 [INFO][4529] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8fb341b61cea6a64c4232aeb94524fce48e5e2149e029df370b60dba5db94b9d" host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:18.766866 containerd[1616]: 2026-01-15 00:32:18.593 [INFO][4529] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:18.766866 containerd[1616]: 2026-01-15 00:32:18.615 [INFO][4529] ipam/ipam.go 511: Trying affinity for 192.168.6.0/26 host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:18.766866 containerd[1616]: 2026-01-15 00:32:18.626 [INFO][4529] ipam/ipam.go 158: Attempting to load block cidr=192.168.6.0/26 host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:18.766866 containerd[1616]: 2026-01-15 00:32:18.647 [INFO][4529] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.6.0/26 host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:18.766866 containerd[1616]: 2026-01-15 00:32:18.648 [INFO][4529] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.6.0/26 handle="k8s-pod-network.8fb341b61cea6a64c4232aeb94524fce48e5e2149e029df370b60dba5db94b9d" host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:18.766866 containerd[1616]: 2026-01-15 00:32:18.656 [INFO][4529] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8fb341b61cea6a64c4232aeb94524fce48e5e2149e029df370b60dba5db94b9d Jan 15 00:32:18.766866 containerd[1616]: 2026-01-15 00:32:18.664 [INFO][4529] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.6.0/26 handle="k8s-pod-network.8fb341b61cea6a64c4232aeb94524fce48e5e2149e029df370b60dba5db94b9d" host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:18.766866 containerd[1616]: 2026-01-15 00:32:18.683 [INFO][4529] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.6.7/26] block=192.168.6.0/26 handle="k8s-pod-network.8fb341b61cea6a64c4232aeb94524fce48e5e2149e029df370b60dba5db94b9d" host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:18.766866 containerd[1616]: 2026-01-15 00:32:18.685 [INFO][4529] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.6.7/26] handle="k8s-pod-network.8fb341b61cea6a64c4232aeb94524fce48e5e2149e029df370b60dba5db94b9d" host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:18.766866 containerd[1616]: 2026-01-15 00:32:18.685 [INFO][4529] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 15 00:32:18.766866 containerd[1616]: 2026-01-15 00:32:18.685 [INFO][4529] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.6.7/26] IPv6=[] ContainerID="8fb341b61cea6a64c4232aeb94524fce48e5e2149e029df370b60dba5db94b9d" HandleID="k8s-pod-network.8fb341b61cea6a64c4232aeb94524fce48e5e2149e029df370b60dba5db94b9d" Workload="ci--4515.1.0--n--4ecc98c3fd-k8s-goldmane--666569f655--fmmn9-eth0" Jan 15 00:32:18.767949 containerd[1616]: 2026-01-15 00:32:18.692 [INFO][4482] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8fb341b61cea6a64c4232aeb94524fce48e5e2149e029df370b60dba5db94b9d" Namespace="calico-system" Pod="goldmane-666569f655-fmmn9" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-goldmane--666569f655--fmmn9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--4ecc98c3fd-k8s-goldmane--666569f655--fmmn9-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a6d2aaa6-9d35-4f8a-99b8-b75c10539cd4", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 31, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-4ecc98c3fd", ContainerID:"", Pod:"goldmane-666569f655-fmmn9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.6.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali23ca391d7f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:32:18.767949 containerd[1616]: 2026-01-15 00:32:18.692 [INFO][4482] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.6.7/32] ContainerID="8fb341b61cea6a64c4232aeb94524fce48e5e2149e029df370b60dba5db94b9d" Namespace="calico-system" Pod="goldmane-666569f655-fmmn9" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-goldmane--666569f655--fmmn9-eth0" Jan 15 00:32:18.767949 containerd[1616]: 2026-01-15 00:32:18.692 [INFO][4482] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali23ca391d7f5 ContainerID="8fb341b61cea6a64c4232aeb94524fce48e5e2149e029df370b60dba5db94b9d" Namespace="calico-system" Pod="goldmane-666569f655-fmmn9" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-goldmane--666569f655--fmmn9-eth0" Jan 15 00:32:18.767949 containerd[1616]: 2026-01-15 00:32:18.704 [INFO][4482] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8fb341b61cea6a64c4232aeb94524fce48e5e2149e029df370b60dba5db94b9d" Namespace="calico-system" Pod="goldmane-666569f655-fmmn9" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-goldmane--666569f655--fmmn9-eth0" Jan 15 00:32:18.767949 containerd[1616]: 2026-01-15 00:32:18.706 [INFO][4482] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8fb341b61cea6a64c4232aeb94524fce48e5e2149e029df370b60dba5db94b9d" 
Namespace="calico-system" Pod="goldmane-666569f655-fmmn9" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-goldmane--666569f655--fmmn9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--4ecc98c3fd-k8s-goldmane--666569f655--fmmn9-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a6d2aaa6-9d35-4f8a-99b8-b75c10539cd4", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 31, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-4ecc98c3fd", ContainerID:"8fb341b61cea6a64c4232aeb94524fce48e5e2149e029df370b60dba5db94b9d", Pod:"goldmane-666569f655-fmmn9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.6.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali23ca391d7f5", MAC:"72:c8:08:db:1f:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:32:18.767949 containerd[1616]: 2026-01-15 00:32:18.728 [INFO][4482] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8fb341b61cea6a64c4232aeb94524fce48e5e2149e029df370b60dba5db94b9d" Namespace="calico-system" Pod="goldmane-666569f655-fmmn9" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-goldmane--666569f655--fmmn9-eth0" Jan 15 00:32:18.804988 containerd[1616]: time="2026-01-15T00:32:18.804940137Z" level=info msg="StartContainer for \"215f7b93f7e6805232d39a618cc5154f805d9a945124ad4f173c7f49a79e9eb5\" returns successfully" Jan 15 00:32:18.817400 systemd-networkd[1518]: calicd11e4bda5c: Gained IPv6LL Jan 15 00:32:18.826451 containerd[1616]: time="2026-01-15T00:32:18.826263387Z" level=info msg="connecting to shim 8fb341b61cea6a64c4232aeb94524fce48e5e2149e029df370b60dba5db94b9d" address="unix:///run/containerd/s/331b24830fcae2f7642838881ef0b3ba811feb3b03140a92f4b9e88bd061c383" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:32:18.903334 systemd[1]: Started cri-containerd-8fb341b61cea6a64c4232aeb94524fce48e5e2149e029df370b60dba5db94b9d.scope - libcontainer container 8fb341b61cea6a64c4232aeb94524fce48e5e2149e029df370b60dba5db94b9d. 
Jan 15 00:32:18.990255 containerd[1616]: time="2026-01-15T00:32:18.989875940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-586c796f68-7pr9q,Uid:3b2df0f5-3af7-40bf-8f6e-f5e8397900ad,Namespace:calico-apiserver,Attempt:0,}" Jan 15 00:32:19.048000 audit: BPF prog-id=246 op=LOAD Jan 15 00:32:19.049000 audit: BPF prog-id=247 op=LOAD Jan 15 00:32:19.049000 audit[4638]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4625 pid=4638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:19.049000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866623334316236316365613661363463343233326165623934353234 Jan 15 00:32:19.051000 audit: BPF prog-id=247 op=UNLOAD Jan 15 00:32:19.051000 audit[4638]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4625 pid=4638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:19.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866623334316236316365613661363463343233326165623934353234 Jan 15 00:32:19.051000 audit: BPF prog-id=248 op=LOAD Jan 15 00:32:19.051000 audit[4638]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4625 pid=4638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:19.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866623334316236316365613661363463343233326165623934353234 Jan 15 00:32:19.052000 audit: BPF prog-id=249 op=LOAD Jan 15 00:32:19.052000 audit[4638]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4625 pid=4638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:19.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866623334316236316365613661363463343233326165623934353234 Jan 15 00:32:19.052000 audit: BPF prog-id=249 op=UNLOAD Jan 15 00:32:19.052000 audit[4638]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4625 pid=4638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:19.052000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866623334316236316365613661363463343233326165623934353234 Jan 15 00:32:19.053000 audit: BPF prog-id=248 op=UNLOAD Jan 15 00:32:19.053000 audit[4638]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4625 pid=4638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:19.053000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866623334316236316365613661363463343233326165623934353234 Jan 15 00:32:19.054000 audit: BPF prog-id=250 op=LOAD Jan 15 00:32:19.054000 audit[4638]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4625 pid=4638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:19.054000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866623334316236316365613661363463343233326165623934353234 Jan 15 00:32:19.074383 systemd-networkd[1518]: cali60c829703c8: Gained IPv6LL Jan 15 00:32:19.137301 systemd-networkd[1518]: calid38bf3ec5df: Gained IPv6LL Jan 15 00:32:19.141000 audit[4675]: NETFILTER_CFG table=filter:134 family=2 entries=60 op=nft_register_chain pid=4675 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 00:32:19.141000 audit[4675]: SYSCALL arch=c000003e syscall=46 success=yes exit=29916 a0=3 a1=7fff6c40eef0 a2=0 a3=7fff6c40eedc items=0 ppid=3907 pid=4675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:19.141000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 00:32:19.234081 containerd[1616]: time="2026-01-15T00:32:19.233582014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-fmmn9,Uid:a6d2aaa6-9d35-4f8a-99b8-b75c10539cd4,Namespace:calico-system,Attempt:0,} returns sandbox id \"8fb341b61cea6a64c4232aeb94524fce48e5e2149e029df370b60dba5db94b9d\"" Jan 15 00:32:19.241756 containerd[1616]: time="2026-01-15T00:32:19.241533432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 15 00:32:19.329612 systemd-networkd[1518]: cali67df4239d51: Link UP Jan 15 00:32:19.329834 systemd-networkd[1518]: cali67df4239d51: Gained carrier Jan 15 00:32:19.363962 containerd[1616]: 2026-01-15 00:32:19.112 [INFO][4660] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--n--4ecc98c3fd-k8s-calico--apiserver--586c796f68--7pr9q-eth0 calico-apiserver-586c796f68- calico-apiserver 3b2df0f5-3af7-40bf-8f6e-f5e8397900ad 891 0 2026-01-15 00:31:47 +0000 UTC map[apiserver:true 
app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:586c796f68 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515.1.0-n-4ecc98c3fd calico-apiserver-586c796f68-7pr9q eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali67df4239d51 [] [] }} ContainerID="329fd0f9fde32f8088bbba409979e9d92b9842ac06a0d484808c6d32d69e6c7f" Namespace="calico-apiserver" Pod="calico-apiserver-586c796f68-7pr9q" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-calico--apiserver--586c796f68--7pr9q-" Jan 15 00:32:19.363962 containerd[1616]: 2026-01-15 00:32:19.113 [INFO][4660] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="329fd0f9fde32f8088bbba409979e9d92b9842ac06a0d484808c6d32d69e6c7f" Namespace="calico-apiserver" Pod="calico-apiserver-586c796f68-7pr9q" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-calico--apiserver--586c796f68--7pr9q-eth0" Jan 15 00:32:19.363962 containerd[1616]: 2026-01-15 00:32:19.211 [INFO][4679] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="329fd0f9fde32f8088bbba409979e9d92b9842ac06a0d484808c6d32d69e6c7f" HandleID="k8s-pod-network.329fd0f9fde32f8088bbba409979e9d92b9842ac06a0d484808c6d32d69e6c7f" Workload="ci--4515.1.0--n--4ecc98c3fd-k8s-calico--apiserver--586c796f68--7pr9q-eth0" Jan 15 00:32:19.363962 containerd[1616]: 2026-01-15 00:32:19.212 [INFO][4679] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="329fd0f9fde32f8088bbba409979e9d92b9842ac06a0d484808c6d32d69e6c7f" HandleID="k8s-pod-network.329fd0f9fde32f8088bbba409979e9d92b9842ac06a0d484808c6d32d69e6c7f" Workload="ci--4515.1.0--n--4ecc98c3fd-k8s-calico--apiserver--586c796f68--7pr9q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037d6b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515.1.0-n-4ecc98c3fd", "pod":"calico-apiserver-586c796f68-7pr9q", "timestamp":"2026-01-15 00:32:19.211126365 +0000 UTC"}, Hostname:"ci-4515.1.0-n-4ecc98c3fd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 00:32:19.363962 containerd[1616]: 2026-01-15 00:32:19.212 [INFO][4679] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 00:32:19.363962 containerd[1616]: 2026-01-15 00:32:19.212 [INFO][4679] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 15 00:32:19.363962 containerd[1616]: 2026-01-15 00:32:19.212 [INFO][4679] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-n-4ecc98c3fd' Jan 15 00:32:19.363962 containerd[1616]: 2026-01-15 00:32:19.226 [INFO][4679] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.329fd0f9fde32f8088bbba409979e9d92b9842ac06a0d484808c6d32d69e6c7f" host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:19.363962 containerd[1616]: 2026-01-15 00:32:19.249 [INFO][4679] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:19.363962 containerd[1616]: 2026-01-15 00:32:19.259 [INFO][4679] ipam/ipam.go 511: Trying affinity for 192.168.6.0/26 host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:19.363962 containerd[1616]: 2026-01-15 00:32:19.268 [INFO][4679] ipam/ipam.go 158: Attempting to load block cidr=192.168.6.0/26 host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:19.363962 containerd[1616]: 2026-01-15 00:32:19.274 [INFO][4679] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.6.0/26 host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:19.363962 containerd[1616]: 2026-01-15 00:32:19.274 [INFO][4679] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.6.0/26 handle="k8s-pod-network.329fd0f9fde32f8088bbba409979e9d92b9842ac06a0d484808c6d32d69e6c7f" host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:19.363962 containerd[1616]: 2026-01-15 00:32:19.277 [INFO][4679] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.329fd0f9fde32f8088bbba409979e9d92b9842ac06a0d484808c6d32d69e6c7f Jan 15 00:32:19.363962 containerd[1616]: 2026-01-15 00:32:19.287 [INFO][4679] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.6.0/26 handle="k8s-pod-network.329fd0f9fde32f8088bbba409979e9d92b9842ac06a0d484808c6d32d69e6c7f" host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:19.363962 containerd[1616]: 2026-01-15 00:32:19.316 [INFO][4679] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.6.8/26] block=192.168.6.0/26 handle="k8s-pod-network.329fd0f9fde32f8088bbba409979e9d92b9842ac06a0d484808c6d32d69e6c7f" host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:19.363962 containerd[1616]: 2026-01-15 00:32:19.317 [INFO][4679] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.6.8/26] handle="k8s-pod-network.329fd0f9fde32f8088bbba409979e9d92b9842ac06a0d484808c6d32d69e6c7f" host="ci-4515.1.0-n-4ecc98c3fd" Jan 15 00:32:19.363962 containerd[1616]: 2026-01-15 00:32:19.317 [INFO][4679] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 15 00:32:19.363962 containerd[1616]: 2026-01-15 00:32:19.317 [INFO][4679] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.6.8/26] IPv6=[] ContainerID="329fd0f9fde32f8088bbba409979e9d92b9842ac06a0d484808c6d32d69e6c7f" HandleID="k8s-pod-network.329fd0f9fde32f8088bbba409979e9d92b9842ac06a0d484808c6d32d69e6c7f" Workload="ci--4515.1.0--n--4ecc98c3fd-k8s-calico--apiserver--586c796f68--7pr9q-eth0" Jan 15 00:32:19.370899 containerd[1616]: 2026-01-15 00:32:19.324 [INFO][4660] cni-plugin/k8s.go 418: Populated endpoint ContainerID="329fd0f9fde32f8088bbba409979e9d92b9842ac06a0d484808c6d32d69e6c7f" Namespace="calico-apiserver" Pod="calico-apiserver-586c796f68-7pr9q" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-calico--apiserver--586c796f68--7pr9q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--4ecc98c3fd-k8s-calico--apiserver--586c796f68--7pr9q-eth0", GenerateName:"calico-apiserver-586c796f68-", Namespace:"calico-apiserver", SelfLink:"", UID:"3b2df0f5-3af7-40bf-8f6e-f5e8397900ad", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 31, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"586c796f68", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-4ecc98c3fd", ContainerID:"", Pod:"calico-apiserver-586c796f68-7pr9q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.6.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali67df4239d51", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:32:19.370899 containerd[1616]: 2026-01-15 00:32:19.324 [INFO][4660] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.6.8/32] ContainerID="329fd0f9fde32f8088bbba409979e9d92b9842ac06a0d484808c6d32d69e6c7f" Namespace="calico-apiserver" Pod="calico-apiserver-586c796f68-7pr9q" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-calico--apiserver--586c796f68--7pr9q-eth0" Jan 15 00:32:19.370899 containerd[1616]: 2026-01-15 00:32:19.324 [INFO][4660] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali67df4239d51 ContainerID="329fd0f9fde32f8088bbba409979e9d92b9842ac06a0d484808c6d32d69e6c7f" Namespace="calico-apiserver" Pod="calico-apiserver-586c796f68-7pr9q" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-calico--apiserver--586c796f68--7pr9q-eth0" Jan 15 00:32:19.370899 containerd[1616]: 2026-01-15 00:32:19.328 [INFO][4660] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="329fd0f9fde32f8088bbba409979e9d92b9842ac06a0d484808c6d32d69e6c7f" Namespace="calico-apiserver" Pod="calico-apiserver-586c796f68-7pr9q" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-calico--apiserver--586c796f68--7pr9q-eth0" Jan 15 00:32:19.370899 containerd[1616]: 2026-01-15 00:32:19.329 [INFO][4660] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="329fd0f9fde32f8088bbba409979e9d92b9842ac06a0d484808c6d32d69e6c7f" Namespace="calico-apiserver" Pod="calico-apiserver-586c796f68-7pr9q" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-calico--apiserver--586c796f68--7pr9q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--4ecc98c3fd-k8s-calico--apiserver--586c796f68--7pr9q-eth0", GenerateName:"calico-apiserver-586c796f68-", Namespace:"calico-apiserver", SelfLink:"", UID:"3b2df0f5-3af7-40bf-8f6e-f5e8397900ad", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 31, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"586c796f68", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-4ecc98c3fd", ContainerID:"329fd0f9fde32f8088bbba409979e9d92b9842ac06a0d484808c6d32d69e6c7f", Pod:"calico-apiserver-586c796f68-7pr9q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.6.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali67df4239d51", MAC:"0e:e4:07:ad:60:49", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:32:19.370899 containerd[1616]: 2026-01-15 00:32:19.351 [INFO][4660] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="329fd0f9fde32f8088bbba409979e9d92b9842ac06a0d484808c6d32d69e6c7f" Namespace="calico-apiserver" Pod="calico-apiserver-586c796f68-7pr9q" WorkloadEndpoint="ci--4515.1.0--n--4ecc98c3fd-k8s-calico--apiserver--586c796f68--7pr9q-eth0" Jan 15 00:32:19.431884 containerd[1616]: time="2026-01-15T00:32:19.431825096Z" level=info msg="connecting to shim 329fd0f9fde32f8088bbba409979e9d92b9842ac06a0d484808c6d32d69e6c7f" address="unix:///run/containerd/s/8af60321ab70d1bc9cbcc7375010791a9e9292897bacb7046fcbea6a7b3626a9" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:32:19.501449 systemd[1]: Started cri-containerd-329fd0f9fde32f8088bbba409979e9d92b9842ac06a0d484808c6d32d69e6c7f.scope - libcontainer container 329fd0f9fde32f8088bbba409979e9d92b9842ac06a0d484808c6d32d69e6c7f. 
Jan 15 00:32:19.520886 kubelet[2785]: E0115 00:32:19.520847 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:32:19.530956 kubelet[2785]: E0115 00:32:19.530762 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:32:19.544496 kubelet[2785]: E0115 00:32:19.544442 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d4f97847b-lrvs5" podUID="5adbdfdd-96a2-41eb-8663-7460bd3865b9" Jan 15 00:32:19.544932 kubelet[2785]: E0115 00:32:19.544861 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-586c796f68-gf7fx" podUID="b432d05d-ed71-4758-b9af-7738bf34afb7" Jan 15 00:32:19.557000 audit[4741]: NETFILTER_CFG table=filter:135 family=2 entries=63 op=nft_register_chain pid=4741 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 00:32:19.557000 audit[4741]: SYSCALL arch=c000003e syscall=46 success=yes exit=30664 a0=3 a1=7fff598a3870 a2=0 a3=7fff598a385c items=0 ppid=3907 pid=4741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:19.557000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 00:32:19.589647 containerd[1616]: time="2026-01-15T00:32:19.589533682Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:32:19.598251 kubelet[2785]: I0115 00:32:19.597822 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-cxd6l" podStartSLOduration=45.570202527 podStartE2EDuration="45.570202527s" podCreationTimestamp="2026-01-15 00:31:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 00:32:19.567315059 +0000 UTC m=+50.783578325" watchObservedRunningTime="2026-01-15 00:32:19.570202527 +0000 UTC m=+50.786465794" Jan 15 00:32:19.625399 containerd[1616]: time="2026-01-15T00:32:19.622539537Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 15 00:32:19.625399 containerd[1616]: 
time="2026-01-15T00:32:19.622972786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 15 00:32:19.640671 kubelet[2785]: E0115 00:32:19.640579 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 00:32:19.640671 kubelet[2785]: E0115 00:32:19.640633 2785 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 00:32:19.641143 kubelet[2785]: E0115 00:32:19.640806 2785 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mpqkw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-fmmn9_calico-system(a6d2aaa6-9d35-4f8a-99b8-b75c10539cd4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 15 00:32:19.642619 kubelet[2785]: E0115 00:32:19.642551 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fmmn9" podUID="a6d2aaa6-9d35-4f8a-99b8-b75c10539cd4" Jan 15 00:32:19.662000 audit[4743]: NETFILTER_CFG table=filter:136 family=2 entries=20 op=nft_register_rule pid=4743 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:32:19.662000 audit[4743]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff59d3a6e0 a2=0 a3=7fff59d3a6cc items=0 ppid=2937 pid=4743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:19.662000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:32:19.669000 audit[4743]: NETFILTER_CFG table=nat:137 family=2 entries=14 op=nft_register_rule pid=4743 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:32:19.669000 audit[4743]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff59d3a6e0 a2=0 a3=0 items=0 ppid=2937 pid=4743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:19.669000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:32:19.705000 audit: BPF prog-id=251 op=LOAD Jan 15 00:32:19.708000 audit: BPF prog-id=252 op=LOAD Jan 15 00:32:19.708000 audit[4721]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000230238 a2=98 a3=0 items=0 ppid=4708 pid=4721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:19.708000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332396664306639666465333266383038386262626134303939373965 Jan 15 00:32:19.708000 audit: BPF prog-id=252 op=UNLOAD Jan 15 00:32:19.708000 audit[4721]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4708 pid=4721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:19.708000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332396664306639666465333266383038386262626134303939373965 Jan 15 00:32:19.708000 audit: BPF prog-id=253 op=LOAD Jan 15 00:32:19.708000 audit[4721]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000230488 a2=98 a3=0 items=0 ppid=4708 pid=4721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:19.708000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332396664306639666465333266383038386262626134303939373965 Jan 15 00:32:19.709000 audit: BPF prog-id=254 op=LOAD Jan 15 00:32:19.709000 audit[4721]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000230218 a2=98 a3=0 items=0 ppid=4708 pid=4721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:19.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332396664306639666465333266383038386262626134303939373965 Jan 15 00:32:19.709000 audit: BPF prog-id=254 op=UNLOAD Jan 15 00:32:19.709000 audit[4721]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4708 pid=4721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:19.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332396664306639666465333266383038386262626134303939373965 Jan 15 00:32:19.709000 audit: BPF prog-id=253 op=UNLOAD Jan 15 00:32:19.709000 audit[4721]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4708 pid=4721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:19.709000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332396664306639666465333266383038386262626134303939373965 Jan 15 00:32:19.709000 audit: BPF prog-id=255 op=LOAD Jan 15 00:32:19.709000 audit[4721]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002306e8 a2=98 a3=0 items=0 ppid=4708 pid=4721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:19.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332396664306639666465333266383038386262626134303939373965 Jan 15 00:32:19.755002 kubelet[2785]: I0115 00:32:19.754667 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-n8h7v" podStartSLOduration=45.754588755 podStartE2EDuration="45.754588755s" podCreationTimestamp="2026-01-15 00:31:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 00:32:19.753206719 +0000 UTC m=+50.969469986" watchObservedRunningTime="2026-01-15 00:32:19.754588755 +0000 UTC m=+50.970852022" Jan 15 00:32:19.763000 audit[4745]: NETFILTER_CFG table=filter:138 family=2 entries=17 op=nft_register_rule pid=4745 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:32:19.763000 audit[4745]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff6d8a7ab0 a2=0 a3=7fff6d8a7a9c items=0 ppid=2937 pid=4745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:19.763000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:32:19.785000 audit[4745]: NETFILTER_CFG table=nat:139 family=2 entries=35 op=nft_register_chain pid=4745 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:32:19.785000 audit[4745]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fff6d8a7ab0 a2=0 a3=7fff6d8a7a9c items=0 ppid=2937 pid=4745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:19.785000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:32:19.842226 systemd-networkd[1518]: cali3151784e5bc: Gained IPv6LL Jan 15 00:32:19.865050 containerd[1616]: time="2026-01-15T00:32:19.864785300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-586c796f68-7pr9q,Uid:3b2df0f5-3af7-40bf-8f6e-f5e8397900ad,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"329fd0f9fde32f8088bbba409979e9d92b9842ac06a0d484808c6d32d69e6c7f\"" Jan 15 00:32:19.869601 containerd[1616]: time="2026-01-15T00:32:19.869545830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 00:32:20.289705 systemd-networkd[1518]: cali23ca391d7f5: Gained IPv6LL Jan 15 
00:32:20.321921 containerd[1616]: time="2026-01-15T00:32:20.321626826Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:32:20.336126 containerd[1616]: time="2026-01-15T00:32:20.335746644Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 00:32:20.336126 containerd[1616]: time="2026-01-15T00:32:20.335925384Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 00:32:20.336353 kubelet[2785]: E0115 00:32:20.336294 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:32:20.336442 kubelet[2785]: E0115 00:32:20.336383 2785 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:32:20.336597 kubelet[2785]: E0115 00:32:20.336541 2785 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8d7lk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed 
in pod calico-apiserver-586c796f68-7pr9q_calico-apiserver(3b2df0f5-3af7-40bf-8f6e-f5e8397900ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 00:32:20.338167 kubelet[2785]: E0115 00:32:20.338099 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-586c796f68-7pr9q" podUID="3b2df0f5-3af7-40bf-8f6e-f5e8397900ad" Jan 15 00:32:20.550992 kubelet[2785]: E0115 00:32:20.548508 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:32:20.550992 kubelet[2785]: E0115 00:32:20.550569 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-586c796f68-7pr9q" podUID="3b2df0f5-3af7-40bf-8f6e-f5e8397900ad" Jan 15 00:32:20.551550 kubelet[2785]: E0115 00:32:20.551377 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fmmn9" podUID="a6d2aaa6-9d35-4f8a-99b8-b75c10539cd4" Jan 15 00:32:20.553093 kubelet[2785]: E0115 00:32:20.552157 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:32:20.614000 audit[4752]: NETFILTER_CFG table=filter:140 family=2 entries=14 op=nft_register_rule pid=4752 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:32:20.614000 audit[4752]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe48b0e0b0 a2=0 a3=7ffe48b0e09c items=0 ppid=2937 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:20.614000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:32:20.632000 audit[4752]: NETFILTER_CFG table=nat:141 family=2 entries=56 op=nft_register_chain pid=4752 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:32:20.632000 audit[4752]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffe48b0e0b0 a2=0 a3=7ffe48b0e09c items=0 ppid=2937 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:20.632000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:32:21.185371 systemd-networkd[1518]: cali67df4239d51: Gained IPv6LL Jan 15 00:32:21.550937 kubelet[2785]: E0115 00:32:21.550872 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:32:21.551747 kubelet[2785]: E0115 00:32:21.551601 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:32:21.554045 kubelet[2785]: E0115 00:32:21.553945 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-586c796f68-7pr9q" podUID="3b2df0f5-3af7-40bf-8f6e-f5e8397900ad" Jan 15 00:32:21.658000 audit[4763]: NETFILTER_CFG table=filter:142 family=2 entries=14 op=nft_register_rule pid=4763 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:32:21.658000 audit[4763]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffff4b9c820 a2=0 a3=7ffff4b9c80c items=0 ppid=2937 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:21.658000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:32:21.667000 audit[4763]: NETFILTER_CFG table=nat:143 family=2 entries=20 op=nft_register_rule pid=4763 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:32:21.667000 audit[4763]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffff4b9c820 a2=0 a3=7ffff4b9c80c items=0 ppid=2937 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:32:21.667000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:32:28.988628 containerd[1616]: time="2026-01-15T00:32:28.988566555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 15 00:32:29.316305 containerd[1616]: time="2026-01-15T00:32:29.316194601Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:32:29.317753 containerd[1616]: time="2026-01-15T00:32:29.317661890Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 15 00:32:29.317931 containerd[1616]: time="2026-01-15T00:32:29.317720329Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 15 00:32:29.318148 kubelet[2785]: E0115 00:32:29.318111 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 00:32:29.318837 kubelet[2785]: E0115 00:32:29.318553 2785 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 00:32:29.318837 kubelet[2785]: E0115 00:32:29.318783 2785 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-brgll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rjlcz_calico-system(14ced92d-cf89-41f0-99bf-edc9c92a737b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 15 00:32:29.321695 containerd[1616]: time="2026-01-15T00:32:29.321640480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 15 00:32:29.664987 containerd[1616]: time="2026-01-15T00:32:29.661390923Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:32:29.668040 containerd[1616]: time="2026-01-15T00:32:29.667899823Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc 
= failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 15 00:32:29.668040 containerd[1616]: time="2026-01-15T00:32:29.667974937Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 15 00:32:29.669073 kubelet[2785]: E0115 00:32:29.668395 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 00:32:29.669073 kubelet[2785]: E0115 00:32:29.668451 2785 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 00:32:29.669073 kubelet[2785]: E0115 00:32:29.668580 2785 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-brgll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rjlcz_calico-system(14ced92d-cf89-41f0-99bf-edc9c92a737b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 15 00:32:29.670177 kubelet[2785]: E0115 00:32:29.670117 
2785 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rjlcz" podUID="14ced92d-cf89-41f0-99bf-edc9c92a737b" Jan 15 00:32:29.989759 containerd[1616]: time="2026-01-15T00:32:29.989087668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 15 00:32:30.317851 containerd[1616]: time="2026-01-15T00:32:30.317591143Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:32:30.319071 containerd[1616]: time="2026-01-15T00:32:30.318906012Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 15 00:32:30.319280 containerd[1616]: time="2026-01-15T00:32:30.318951310Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 15 00:32:30.319709 kubelet[2785]: E0115 00:32:30.319654 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 00:32:30.320631 kubelet[2785]: E0115 00:32:30.320331 2785 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 00:32:30.320631 kubelet[2785]: E0115 00:32:30.320556 2785 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ts2lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7d4f97847b-lrvs5_calico-system(5adbdfdd-96a2-41eb-8663-7460bd3865b9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 15 00:32:30.322741 kubelet[2785]: E0115 00:32:30.321821 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d4f97847b-lrvs5" podUID="5adbdfdd-96a2-41eb-8663-7460bd3865b9" Jan 15 00:32:30.990045 containerd[1616]: time="2026-01-15T00:32:30.989995048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 15 00:32:31.313969 containerd[1616]: 
time="2026-01-15T00:32:31.313720158Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:32:31.314914 containerd[1616]: time="2026-01-15T00:32:31.314795693Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 15 00:32:31.314914 containerd[1616]: time="2026-01-15T00:32:31.314875547Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 15 00:32:31.315356 kubelet[2785]: E0115 00:32:31.315289 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 00:32:31.315523 kubelet[2785]: E0115 00:32:31.315464 2785 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 00:32:31.315827 kubelet[2785]: E0115 00:32:31.315773 2785 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:1c85d4f6dcf249e199926edb662227fb,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xp2m5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7fdcf9f989-nm8zk_calico-system(a4b16ce3-dca7-42f9-90d7-10ddcc6423d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 15 00:32:31.318806 containerd[1616]: time="2026-01-15T00:32:31.318741374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 15 00:32:31.651892 containerd[1616]: time="2026-01-15T00:32:31.651721744Z" level=info msg="fetch failed after status: 404 Not Found" 
host=ghcr.io Jan 15 00:32:31.652528 containerd[1616]: time="2026-01-15T00:32:31.652449882Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 15 00:32:31.652816 containerd[1616]: time="2026-01-15T00:32:31.652731913Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 15 00:32:31.652966 kubelet[2785]: E0115 00:32:31.652924 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 00:32:31.653443 kubelet[2785]: E0115 00:32:31.652985 2785 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 00:32:31.653443 kubelet[2785]: E0115 00:32:31.653385 2785 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xp2m5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7fdcf9f989-nm8zk_calico-system(a4b16ce3-dca7-42f9-90d7-10ddcc6423d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 15 00:32:31.654960 kubelet[2785]: E0115 00:32:31.654851 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fdcf9f989-nm8zk" podUID="a4b16ce3-dca7-42f9-90d7-10ddcc6423d9" Jan 15 00:32:31.987551 containerd[1616]: time="2026-01-15T00:32:31.987071733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 15 00:32:32.288724 containerd[1616]: time="2026-01-15T00:32:32.288633545Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:32:32.289756 containerd[1616]: time="2026-01-15T00:32:32.289687365Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 15 00:32:32.289871 containerd[1616]: time="2026-01-15T00:32:32.289819224Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 15 00:32:32.290177 kubelet[2785]: E0115 00:32:32.290065 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 00:32:32.290177 kubelet[2785]: E0115 00:32:32.290142 2785 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 00:32:32.290604 kubelet[2785]: E0115 00:32:32.290529 2785 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mpqkw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-fmmn9_calico-system(a6d2aaa6-9d35-4f8a-99b8-b75c10539cd4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 15 00:32:32.293452 kubelet[2785]: E0115 00:32:32.293381 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fmmn9" podUID="a6d2aaa6-9d35-4f8a-99b8-b75c10539cd4" Jan 15 00:32:32.988569 containerd[1616]: time="2026-01-15T00:32:32.988413460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 
15 00:32:33.316423 containerd[1616]: time="2026-01-15T00:32:33.316264908Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:32:33.317182 containerd[1616]: time="2026-01-15T00:32:33.317113854Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 00:32:33.318170 containerd[1616]: time="2026-01-15T00:32:33.317170026Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 00:32:33.318226 kubelet[2785]: E0115 00:32:33.317524 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:32:33.318226 kubelet[2785]: E0115 00:32:33.317609 2785 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:32:33.318226 kubelet[2785]: E0115 00:32:33.317760 2785 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tl2m7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod calico-apiserver-586c796f68-gf7fx_calico-apiserver(b432d05d-ed71-4758-b9af-7738bf34afb7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 00:32:33.319253 kubelet[2785]: E0115 00:32:33.319187 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-586c796f68-gf7fx" podUID="b432d05d-ed71-4758-b9af-7738bf34afb7" Jan 15 00:32:35.988428 containerd[1616]: time="2026-01-15T00:32:35.988007330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 00:32:36.303252 containerd[1616]: time="2026-01-15T00:32:36.303059419Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:32:36.304180 containerd[1616]: time="2026-01-15T00:32:36.304061624Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 00:32:36.304180 containerd[1616]: time="2026-01-15T00:32:36.304147059Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 00:32:36.304379 kubelet[2785]: E0115 00:32:36.304332 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:32:36.305501 kubelet[2785]: E0115 00:32:36.304395 2785 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:32:36.305501 kubelet[2785]: E0115 00:32:36.304536 2785 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8d7lk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-586c796f68-7pr9q_calico-apiserver(3b2df0f5-3af7-40bf-8f6e-f5e8397900ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 00:32:36.306615 kubelet[2785]: E0115 00:32:36.306550 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-586c796f68-7pr9q" podUID="3b2df0f5-3af7-40bf-8f6e-f5e8397900ad" Jan 15 00:32:42.991768 kubelet[2785]: E0115 00:32:42.991567 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fmmn9" podUID="a6d2aaa6-9d35-4f8a-99b8-b75c10539cd4" Jan 15 00:32:43.989751 kubelet[2785]: E0115 00:32:43.989582 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rjlcz" podUID="14ced92d-cf89-41f0-99bf-edc9c92a737b" Jan 15 00:32:44.557213 kubelet[2785]: E0115 00:32:44.557172 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:32:45.989634 kubelet[2785]: E0115 00:32:45.988516 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d4f97847b-lrvs5" podUID="5adbdfdd-96a2-41eb-8663-7460bd3865b9" Jan 15 00:32:45.991514 kubelet[2785]: E0115 00:32:45.990726 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fdcf9f989-nm8zk" podUID="a4b16ce3-dca7-42f9-90d7-10ddcc6423d9" Jan 15 00:32:47.987212 kubelet[2785]: E0115 00:32:47.987134 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-586c796f68-gf7fx" podUID="b432d05d-ed71-4758-b9af-7738bf34afb7" Jan 15 00:32:48.989510 kubelet[2785]: E0115 00:32:48.989450 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:32:50.990405 kubelet[2785]: E0115 00:32:50.990338 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-586c796f68-7pr9q" podUID="3b2df0f5-3af7-40bf-8f6e-f5e8397900ad" Jan 15 00:32:54.990574 containerd[1616]: time="2026-01-15T00:32:54.990293703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 15 00:32:55.307840 containerd[1616]: time="2026-01-15T00:32:55.307784156Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:32:55.309724 containerd[1616]: time="2026-01-15T00:32:55.309435554Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 15 00:32:55.309724 containerd[1616]: time="2026-01-15T00:32:55.309459049Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 15 00:32:55.311198 kubelet[2785]: E0115 00:32:55.310121 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 00:32:55.311198 kubelet[2785]: E0115 00:32:55.310195 2785 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 00:32:55.311198 kubelet[2785]: E0115 00:32:55.310358 2785 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mpqkw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-fmmn9_calico-system(a6d2aaa6-9d35-4f8a-99b8-b75c10539cd4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 15 00:32:55.312154 kubelet[2785]: E0115 00:32:55.312107 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fmmn9" podUID="a6d2aaa6-9d35-4f8a-99b8-b75c10539cd4" Jan 15 00:32:55.990775 kubelet[2785]: E0115 00:32:55.990726 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:32:56.004035 containerd[1616]: time="2026-01-15T00:32:56.003788501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 15 00:32:56.323562 containerd[1616]: time="2026-01-15T00:32:56.323154043Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:32:56.324596 containerd[1616]: time="2026-01-15T00:32:56.324351041Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 15 00:32:56.324596 containerd[1616]: time="2026-01-15T00:32:56.324441494Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 15 00:32:56.325169 kubelet[2785]: E0115 00:32:56.325123 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 00:32:56.325816 kubelet[2785]: E0115 00:32:56.325187 2785 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 00:32:56.325816 kubelet[2785]: E0115 00:32:56.325327 2785 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-brgll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rjlcz_calico-system(14ced92d-cf89-41f0-99bf-edc9c92a737b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 15 00:32:56.329216 containerd[1616]: time="2026-01-15T00:32:56.329162228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 15 00:32:56.666102 containerd[1616]: time="2026-01-15T00:32:56.665355147Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:32:56.666838 containerd[1616]: time="2026-01-15T00:32:56.666670819Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 15 00:32:56.666838 containerd[1616]: time="2026-01-15T00:32:56.666703361Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 15 00:32:56.667237 kubelet[2785]: E0115 00:32:56.667196 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 00:32:56.667442 kubelet[2785]: E0115 00:32:56.667412 2785 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 00:32:56.668824 kubelet[2785]: E0115 00:32:56.668734 2785 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-brgll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rjlcz_calico-system(14ced92d-cf89-41f0-99bf-edc9c92a737b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 15 00:32:56.670345 kubelet[2785]: E0115 00:32:56.670263 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-rjlcz" podUID="14ced92d-cf89-41f0-99bf-edc9c92a737b" Jan 15 00:32:57.988327 containerd[1616]: time="2026-01-15T00:32:57.988275936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 15 00:32:58.312636 containerd[1616]: time="2026-01-15T00:32:58.312565681Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:32:58.313618 containerd[1616]: time="2026-01-15T00:32:58.313431262Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 15 00:32:58.313618 containerd[1616]: time="2026-01-15T00:32:58.313486475Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 15 00:32:58.314687 kubelet[2785]: E0115 00:32:58.313982 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 00:32:58.315931 kubelet[2785]: E0115 00:32:58.314754 2785 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 00:32:58.315931 kubelet[2785]: E0115 00:32:58.315207 2785 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ts2lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7d4f97847b-lrvs5_calico-system(5adbdfdd-96a2-41eb-8663-7460bd3865b9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 15 00:32:58.316437 kubelet[2785]: E0115 00:32:58.316399 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d4f97847b-lrvs5" podUID="5adbdfdd-96a2-41eb-8663-7460bd3865b9" Jan 15 00:32:58.316793 containerd[1616]: time="2026-01-15T00:32:58.316745866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 15 00:32:58.662134 containerd[1616]: time="2026-01-15T00:32:58.661790644Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:32:58.663085 containerd[1616]: time="2026-01-15T00:32:58.662591931Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 15 00:32:58.663085 containerd[1616]: time="2026-01-15T00:32:58.662710684Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 15 00:32:58.663191 kubelet[2785]: E0115 00:32:58.663094 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 00:32:58.663191 kubelet[2785]: E0115 00:32:58.663167 2785 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 00:32:58.663651 kubelet[2785]: E0115 00:32:58.663317 2785 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:1c85d4f6dcf249e199926edb662227fb,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xp2m5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7fdcf9f989-nm8zk_calico-system(a4b16ce3-dca7-42f9-90d7-10ddcc6423d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 15 00:32:58.666277 containerd[1616]: time="2026-01-15T00:32:58.665794539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 15 00:32:58.974579 containerd[1616]: time="2026-01-15T00:32:58.974287299Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:32:58.976780 containerd[1616]: time="2026-01-15T00:32:58.976610079Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 15 00:32:58.976780 containerd[1616]: time="2026-01-15T00:32:58.976740287Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 15 00:32:58.977203 kubelet[2785]: E0115 00:32:58.977155 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 00:32:58.977456 kubelet[2785]: E0115 00:32:58.977371 2785 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 00:32:58.979519 kubelet[2785]: E0115 00:32:58.979455 2785 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xp2m5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7fdcf9f989-nm8zk_calico-system(a4b16ce3-dca7-42f9-90d7-10ddcc6423d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 15 00:32:58.980934 kubelet[2785]: E0115 00:32:58.980866 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fdcf9f989-nm8zk" podUID="a4b16ce3-dca7-42f9-90d7-10ddcc6423d9" Jan 15 00:32:58.990968 kubelet[2785]: E0115 00:32:58.990855 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:32:59.993100 containerd[1616]: time="2026-01-15T00:32:59.991753416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 00:33:00.335923 containerd[1616]: time="2026-01-15T00:33:00.334397194Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:33:00.336575 containerd[1616]: time="2026-01-15T00:33:00.336504211Z" level=info msg="stop pulling 
image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 00:33:00.336706 containerd[1616]: time="2026-01-15T00:33:00.336580692Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 00:33:00.336983 kubelet[2785]: E0115 00:33:00.336930 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:33:00.338278 kubelet[2785]: E0115 00:33:00.337921 2785 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:33:00.338732 kubelet[2785]: E0115 00:33:00.338660 2785 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tl2m7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-586c796f68-gf7fx_calico-apiserver(b432d05d-ed71-4758-b9af-7738bf34afb7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 00:33:00.341040 kubelet[2785]: E0115 00:33:00.340935 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-586c796f68-gf7fx" podUID="b432d05d-ed71-4758-b9af-7738bf34afb7" Jan 15 00:33:05.990621 containerd[1616]: time="2026-01-15T00:33:05.990293988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 00:33:06.322213 containerd[1616]: time="2026-01-15T00:33:06.322152046Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:33:06.323079 containerd[1616]: time="2026-01-15T00:33:06.322922129Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 00:33:06.323236 containerd[1616]: time="2026-01-15T00:33:06.322920985Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 00:33:06.323469 kubelet[2785]: E0115 00:33:06.323415 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:33:06.323875 kubelet[2785]: E0115 00:33:06.323478 2785 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:33:06.323875 kubelet[2785]: E0115 00:33:06.323618 2785 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8d7lk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-586c796f68-7pr9q_calico-apiserver(3b2df0f5-3af7-40bf-8f6e-f5e8397900ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 00:33:06.325197 kubelet[2785]: E0115 00:33:06.325124 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-586c796f68-7pr9q" podUID="3b2df0f5-3af7-40bf-8f6e-f5e8397900ad" Jan 15 00:33:06.992265 kubelet[2785]: E0115 00:33:06.992145 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rjlcz" podUID="14ced92d-cf89-41f0-99bf-edc9c92a737b" Jan 15 00:33:07.987360 kubelet[2785]: E0115 00:33:07.987288 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:33:07.989571 kubelet[2785]: E0115 00:33:07.989473 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fmmn9" podUID="a6d2aaa6-9d35-4f8a-99b8-b75c10539cd4" Jan 15 00:33:08.988890 kubelet[2785]: E0115 00:33:08.988825 2785 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d4f97847b-lrvs5" podUID="5adbdfdd-96a2-41eb-8663-7460bd3865b9" Jan 15 00:33:11.991480 kubelet[2785]: E0115 00:33:11.991294 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fdcf9f989-nm8zk" podUID="a4b16ce3-dca7-42f9-90d7-10ddcc6423d9" Jan 15 00:33:13.544527 systemd[1]: Started sshd@7-164.92.64.55:22-20.161.92.111:47584.service - OpenSSH per-connection server daemon (20.161.92.111:47584). Jan 15 00:33:13.549827 kernel: kauditd_printk_skb: 214 callbacks suppressed Jan 15 00:33:13.549925 kernel: audit: type=1130 audit(1768437193.546:750): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-164.92.64.55:22-20.161.92.111:47584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:33:13.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-164.92.64.55:22-20.161.92.111:47584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:33:13.978000 audit[4836]: USER_ACCT pid=4836 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:13.983692 sshd[4836]: Accepted publickey for core from 20.161.92.111 port 47584 ssh2: RSA SHA256:EI6zOIQQ/KS+Bep1WC3vAdrkfex1g89wTRiiOfwntxI Jan 15 00:33:13.985384 kernel: audit: type=1101 audit(1768437193.978:751): pid=4836 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:13.986000 audit[4836]: CRED_ACQ pid=4836 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:13.992065 kernel: audit: type=1103 audit(1768437193.986:752): pid=4836 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:13.992048 sshd-session[4836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:33:13.993231 kubelet[2785]: E0115 00:33:13.992795 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-586c796f68-gf7fx" podUID="b432d05d-ed71-4758-b9af-7738bf34afb7" Jan 15 00:33:13.999404 kernel: audit: type=1006 audit(1768437193.986:753): pid=4836 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Jan 15 00:33:13.986000 audit[4836]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe61ba2280 a2=3 a3=0 items=0 ppid=1 pid=4836 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:33:14.007532 kernel: audit: type=1300 audit(1768437193.986:753): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe61ba2280 a2=3 a3=0 items=0 ppid=1 pid=4836 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:33:14.013097 systemd-logind[1592]: New session 8 of user core. Jan 15 00:33:14.018053 kernel: audit: type=1327 audit(1768437193.986:753): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:33:13.986000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:33:14.021430 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 15 00:33:14.031716 kernel: audit: type=1105 audit(1768437194.025:754): pid=4836 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:14.025000 audit[4836]: USER_START pid=4836 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:14.039188 kernel: audit: type=1103 audit(1768437194.031:755): pid=4840 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:14.031000 audit[4840]: CRED_ACQ pid=4840 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:14.842673 sshd[4840]: Connection closed by 20.161.92.111 port 47584 Jan 15 00:33:14.844352 sshd-session[4836]: pam_unix(sshd:session): session closed for user core Jan 15 00:33:14.850000 audit[4836]: USER_END pid=4836 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:14.858094 kernel: audit: type=1106 audit(1768437194.850:756): pid=4836 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:14.858231 kernel: audit: type=1104 audit(1768437194.850:757): pid=4836 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:14.850000 audit[4836]: CRED_DISP pid=4836 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:14.856427 systemd[1]: sshd@7-164.92.64.55:22-20.161.92.111:47584.service: Deactivated successfully. Jan 15 00:33:14.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-164.92.64.55:22-20.161.92.111:47584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:33:14.863718 systemd[1]: session-8.scope: Deactivated successfully. Jan 15 00:33:14.867822 systemd-logind[1592]: Session 8 logged out. Waiting for processes to exit. Jan 15 00:33:14.870303 systemd-logind[1592]: Removed session 8. 
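The repeated ErrImagePull records above all come down to one condition: ghcr.io answers 404 Not Found for every flatcar/calico/*:v3.30.4 reference, so containerd cannot resolve the manifest and the kubelet leaves the pods in ImagePullBackOff. A minimal off-node check of that condition, sketched against the standard OCI distribution endpoints and assuming ghcr.io still issues anonymous pull tokens for public repositories (the repository and tag are taken from the log; the helper name and everything else is illustrative, not part of this log):

    import json
    import urllib.error
    import urllib.request

    REGISTRY = "ghcr.io"
    REPO = "flatcar/calico/kube-controllers"   # repository from the failing pulls above
    TAG = "v3.30.4"                            # tag the kubelet keeps asking for

    def manifest_exists(repo: str, tag: str) -> bool:
        # Anonymous bearer token for a public repository (assumption: ghcr.io grants these).
        token_url = f"https://{REGISTRY}/token?scope=repository:{repo}:pull&service={REGISTRY}"
        with urllib.request.urlopen(token_url) as resp:
            token = json.load(resp)["token"]

        # Manifest lookup; a 404 here is what containerd reports as "failed to resolve image".
        req = urllib.request.Request(
            f"https://{REGISTRY}/v2/{repo}/manifests/{tag}",
            headers={
                "Authorization": f"Bearer {token}",
                "Accept": "application/vnd.oci.image.index.v1+json, "
                          "application/vnd.docker.distribution.manifest.list.v2+json",
            },
        )
        try:
            with urllib.request.urlopen(req):
                return True
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return False
            raise

    if __name__ == "__main__":
        print(f"{REPO}:{TAG} resolvable:", manifest_exists(REPO, TAG))

For the tags in this log the manifest lookup consistently returns 404, which is why every retry that follows fails identically regardless of which Calico component is pulling.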
Jan 15 00:33:16.989201 kubelet[2785]: E0115 00:33:16.989117 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-586c796f68-7pr9q" podUID="3b2df0f5-3af7-40bf-8f6e-f5e8397900ad" Jan 15 00:33:17.987883 kubelet[2785]: E0115 00:33:17.987818 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rjlcz" podUID="14ced92d-cf89-41f0-99bf-edc9c92a737b" Jan 15 00:33:19.920374 systemd[1]: Started sshd@8-164.92.64.55:22-20.161.92.111:47588.service - OpenSSH per-connection server daemon (20.161.92.111:47588). Jan 15 00:33:19.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-164.92.64.55:22-20.161.92.111:47588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:33:19.923164 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 00:33:19.923283 kernel: audit: type=1130 audit(1768437199.919:759): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-164.92.64.55:22-20.161.92.111:47588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:33:20.313000 audit[4881]: USER_ACCT pid=4881 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:20.319193 kernel: audit: type=1101 audit(1768437200.313:760): pid=4881 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:20.319240 sshd[4881]: Accepted publickey for core from 20.161.92.111 port 47588 ssh2: RSA SHA256:EI6zOIQQ/KS+Bep1WC3vAdrkfex1g89wTRiiOfwntxI Jan 15 00:33:20.320000 audit[4881]: CRED_ACQ pid=4881 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:20.321875 sshd-session[4881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:33:20.327155 kernel: audit: type=1103 audit(1768437200.320:761): pid=4881 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:20.332086 kernel: audit: type=1006 audit(1768437200.320:762): pid=4881 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 15 00:33:20.320000 audit[4881]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe0f32eee0 a2=3 a3=0 items=0 ppid=1 pid=4881 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:33:20.339148 kernel: audit: type=1300 audit(1768437200.320:762): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe0f32eee0 a2=3 a3=0 items=0 ppid=1 pid=4881 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:33:20.339262 systemd-logind[1592]: New session 9 of user core. Jan 15 00:33:20.320000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:33:20.344077 kernel: audit: type=1327 audit(1768437200.320:762): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:33:20.347135 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 15 00:33:20.352000 audit[4881]: USER_START pid=4881 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:20.359102 kernel: audit: type=1105 audit(1768437200.352:763): pid=4881 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:20.358000 audit[4884]: CRED_ACQ pid=4884 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:20.366076 kernel: audit: type=1103 audit(1768437200.358:764): pid=4884 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:20.643352 sshd[4884]: Connection closed by 20.161.92.111 port 47588 Jan 15 00:33:20.645557 sshd-session[4881]: pam_unix(sshd:session): session closed for user core Jan 15 00:33:20.647000 audit[4881]: USER_END pid=4881 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:20.654210 kernel: audit: type=1106 audit(1768437200.647:765): pid=4881 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:20.652301 systemd-logind[1592]: Session 9 logged out. Waiting for processes to exit. Jan 15 00:33:20.653474 systemd[1]: sshd@8-164.92.64.55:22-20.161.92.111:47588.service: Deactivated successfully. Jan 15 00:33:20.647000 audit[4881]: CRED_DISP pid=4881 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:20.659968 systemd[1]: session-9.scope: Deactivated successfully. Jan 15 00:33:20.660255 kernel: audit: type=1104 audit(1768437200.647:766): pid=4881 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:20.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-164.92.64.55:22-20.161.92.111:47588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:33:20.669475 systemd-logind[1592]: Removed session 9. 
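The kubelet dns.go:153 "Nameserver limits exceeded" warnings that recur through this log (both above and immediately below) mean the node's resolv.conf lists more nameserver entries than the resolver limit of three, so the kubelet truncates the list when it builds pod DNS configuration. A minimal sketch of that check, assuming the conventional limit of three and the standard /etc/resolv.conf path (illustrative only, not taken from this log):

    from pathlib import Path

    MAX_NAMESERVERS = 3  # resolver limit the kubelet applies when building pod DNS config

    def nameservers(resolv_conf: str = "/etc/resolv.conf") -> list[str]:
        servers = []
        for line in Path(resolv_conf).read_text().splitlines():
            fields = line.split()
            if len(fields) >= 2 and fields[0] == "nameserver":
                servers.append(fields[1])
        return servers

    if __name__ == "__main__":
        found = nameservers()
        if len(found) > MAX_NAMESERVERS:
            print(f"{len(found)} nameservers configured, only the first "
                  f"{MAX_NAMESERVERS} will be applied: {found[:MAX_NAMESERVERS]}")
        else:
            print("nameserver count within limits:", found)

The applied line logged here ("67.207.67.2 67.207.67.3 67.207.67.2") also contains a duplicate, which suggests the node's resolv.conf carries more than three entries with at least one repeated.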
Jan 15 00:33:21.988051 kubelet[2785]: E0115 00:33:21.987719 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:33:21.988945 kubelet[2785]: E0115 00:33:21.988917 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:33:22.988779 kubelet[2785]: E0115 00:33:22.988690 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d4f97847b-lrvs5" podUID="5adbdfdd-96a2-41eb-8663-7460bd3865b9" Jan 15 00:33:22.988779 kubelet[2785]: E0115 00:33:22.988971 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fmmn9" podUID="a6d2aaa6-9d35-4f8a-99b8-b75c10539cd4" Jan 15 00:33:23.986830 kubelet[2785]: E0115 00:33:23.986080 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:33:24.991358 kubelet[2785]: E0115 00:33:24.990266 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-586c796f68-gf7fx" podUID="b432d05d-ed71-4758-b9af-7738bf34afb7" Jan 15 00:33:25.724893 systemd[1]: Started sshd@9-164.92.64.55:22-20.161.92.111:48584.service - OpenSSH per-connection server daemon (20.161.92.111:48584). Jan 15 00:33:25.728045 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 00:33:25.728161 kernel: audit: type=1130 audit(1768437205.725:768): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-164.92.64.55:22-20.161.92.111:48584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:33:25.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-164.92.64.55:22-20.161.92.111:48584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:33:26.089000 audit[4898]: USER_ACCT pid=4898 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:26.096174 sshd-session[4898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:33:26.097215 kernel: audit: type=1101 audit(1768437206.089:769): pid=4898 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:26.097275 sshd[4898]: Accepted publickey for core from 20.161.92.111 port 48584 ssh2: RSA SHA256:EI6zOIQQ/KS+Bep1WC3vAdrkfex1g89wTRiiOfwntxI Jan 15 00:33:26.094000 audit[4898]: CRED_ACQ pid=4898 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:26.104300 kernel: audit: type=1103 audit(1768437206.094:770): pid=4898 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:26.108046 kernel: audit: type=1006 audit(1768437206.094:771): pid=4898 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 15 00:33:26.113084 kernel: audit: type=1300 audit(1768437206.094:771): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffaa9cd890 a2=3 a3=0 items=0 ppid=1 pid=4898 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:33:26.094000 audit[4898]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffaa9cd890 a2=3 a3=0 items=0 ppid=1 pid=4898 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:33:26.117717 kernel: audit: type=1327 audit(1768437206.094:771): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:33:26.094000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:33:26.119813 systemd-logind[1592]: New session 10 of user core. Jan 15 00:33:26.122303 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 15 00:33:26.134120 kernel: audit: type=1105 audit(1768437206.126:772): pid=4898 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:26.126000 audit[4898]: USER_START pid=4898 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:26.134000 audit[4901]: CRED_ACQ pid=4901 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:26.140167 kernel: audit: type=1103 audit(1768437206.134:773): pid=4901 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:26.430676 sshd[4901]: Connection closed by 20.161.92.111 port 48584 Jan 15 00:33:26.431353 sshd-session[4898]: pam_unix(sshd:session): session closed for user core Jan 15 00:33:26.444845 kernel: audit: type=1106 audit(1768437206.433:774): pid=4898 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:26.444971 kernel: audit: type=1104 audit(1768437206.433:775): pid=4898 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:26.433000 audit[4898]: USER_END pid=4898 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:26.433000 audit[4898]: CRED_DISP pid=4898 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:26.440402 systemd[1]: sshd@9-164.92.64.55:22-20.161.92.111:48584.service: Deactivated successfully. Jan 15 00:33:26.443989 systemd[1]: session-10.scope: Deactivated successfully. Jan 15 00:33:26.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-164.92.64.55:22-20.161.92.111:48584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:33:26.452471 systemd-logind[1592]: Session 10 logged out. Waiting for processes to exit. Jan 15 00:33:26.454180 systemd-logind[1592]: Removed session 10. 
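Every container spec the kubelet dumps in these "Unhandled Error" records carries the same hardened SecurityContext: all capabilities dropped, a non-root UID, privilege escalation disabled, and the RuntimeDefault seccomp profile. Purely as an illustration of what that serialized block corresponds to, a sketch using the Kubernetes Python client's generated models (the real manifests here are rendered by the Calico/Tigera operator, not hand-written; values are the 10001/10001 ones logged for the whisker, apiserver, and goldmane containers):

    from kubernetes import client  # pip install kubernetes

    # Mirrors the SecurityContext serialized in the kubelet error records above.
    security_context = client.V1SecurityContext(
        capabilities=client.V1Capabilities(add=[], drop=["ALL"]),
        privileged=False,
        run_as_user=10001,
        run_as_group=10001,
        run_as_non_root=True,
        allow_privilege_escalation=False,
        seccomp_profile=client.V1SeccompProfile(type="RuntimeDefault"),
    )

    print(security_context.to_dict())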
Jan 15 00:33:26.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-164.92.64.55:22-20.161.92.111:48586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:33:26.505150 systemd[1]: Started sshd@10-164.92.64.55:22-20.161.92.111:48586.service - OpenSSH per-connection server daemon (20.161.92.111:48586). Jan 15 00:33:26.870000 audit[4913]: USER_ACCT pid=4913 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:26.872588 sshd[4913]: Accepted publickey for core from 20.161.92.111 port 48586 ssh2: RSA SHA256:EI6zOIQQ/KS+Bep1WC3vAdrkfex1g89wTRiiOfwntxI Jan 15 00:33:26.872000 audit[4913]: CRED_ACQ pid=4913 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:26.872000 audit[4913]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc5aa14cb0 a2=3 a3=0 items=0 ppid=1 pid=4913 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:33:26.872000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:33:26.874465 sshd-session[4913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:33:26.883471 systemd-logind[1592]: New session 11 of user core. Jan 15 00:33:26.889775 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 15 00:33:26.895000 audit[4913]: USER_START pid=4913 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:26.899000 audit[4916]: CRED_ACQ pid=4916 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:26.996071 kubelet[2785]: E0115 00:33:26.995552 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fdcf9f989-nm8zk" podUID="a4b16ce3-dca7-42f9-90d7-10ddcc6423d9" Jan 15 00:33:27.283186 sshd[4916]: Connection closed by 20.161.92.111 port 48586 Jan 15 00:33:27.283645 sshd-session[4913]: pam_unix(sshd:session): session closed for user core Jan 15 00:33:27.286000 audit[4913]: USER_END pid=4913 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:27.287000 audit[4913]: CRED_DISP pid=4913 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:27.292828 systemd-logind[1592]: Session 11 logged out. Waiting for processes to exit. Jan 15 00:33:27.296625 systemd[1]: sshd@10-164.92.64.55:22-20.161.92.111:48586.service: Deactivated successfully. Jan 15 00:33:27.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-164.92.64.55:22-20.161.92.111:48586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:33:27.300998 systemd[1]: session-11.scope: Deactivated successfully. Jan 15 00:33:27.304728 systemd-logind[1592]: Removed session 11. Jan 15 00:33:27.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-164.92.64.55:22-20.161.92.111:48600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:33:27.358435 systemd[1]: Started sshd@11-164.92.64.55:22-20.161.92.111:48600.service - OpenSSH per-connection server daemon (20.161.92.111:48600). 
Jan 15 00:33:27.713000 audit[4926]: USER_ACCT pid=4926 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:27.714852 sshd[4926]: Accepted publickey for core from 20.161.92.111 port 48600 ssh2: RSA SHA256:EI6zOIQQ/KS+Bep1WC3vAdrkfex1g89wTRiiOfwntxI Jan 15 00:33:27.714000 audit[4926]: CRED_ACQ pid=4926 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:27.715000 audit[4926]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff112943f0 a2=3 a3=0 items=0 ppid=1 pid=4926 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:33:27.715000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:33:27.716475 sshd-session[4926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:33:27.729118 systemd-logind[1592]: New session 12 of user core. Jan 15 00:33:27.736379 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 15 00:33:27.744000 audit[4926]: USER_START pid=4926 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:27.751000 audit[4929]: CRED_ACQ pid=4929 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:28.045304 sshd[4929]: Connection closed by 20.161.92.111 port 48600 Jan 15 00:33:28.048332 sshd-session[4926]: pam_unix(sshd:session): session closed for user core Jan 15 00:33:28.050000 audit[4926]: USER_END pid=4926 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:28.051000 audit[4926]: CRED_DISP pid=4926 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:28.059837 systemd[1]: sshd@11-164.92.64.55:22-20.161.92.111:48600.service: Deactivated successfully. Jan 15 00:33:28.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-164.92.64.55:22-20.161.92.111:48600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:33:28.061157 systemd-logind[1592]: Session 12 logged out. Waiting for processes to exit. Jan 15 00:33:28.066429 systemd[1]: session-12.scope: Deactivated successfully. 
Jan 15 00:33:28.070077 systemd-logind[1592]: Removed session 12. Jan 15 00:33:28.992060 kubelet[2785]: E0115 00:33:28.991059 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-586c796f68-7pr9q" podUID="3b2df0f5-3af7-40bf-8f6e-f5e8397900ad" Jan 15 00:33:32.992730 kubelet[2785]: E0115 00:33:32.992476 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rjlcz" podUID="14ced92d-cf89-41f0-99bf-edc9c92a737b" Jan 15 00:33:33.128113 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 15 00:33:33.128292 kernel: audit: type=1130 audit(1768437213.125:795): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-164.92.64.55:22-20.161.92.111:53786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:33:33.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-164.92.64.55:22-20.161.92.111:53786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:33:33.126426 systemd[1]: Started sshd@12-164.92.64.55:22-20.161.92.111:53786.service - OpenSSH per-connection server daemon (20.161.92.111:53786). 
Jan 15 00:33:33.515000 audit[4942]: USER_ACCT pid=4942 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:33.523085 kernel: audit: type=1101 audit(1768437213.515:796): pid=4942 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:33.523252 sshd[4942]: Accepted publickey for core from 20.161.92.111 port 53786 ssh2: RSA SHA256:EI6zOIQQ/KS+Bep1WC3vAdrkfex1g89wTRiiOfwntxI Jan 15 00:33:33.525667 sshd-session[4942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:33:33.523000 audit[4942]: CRED_ACQ pid=4942 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:33.533067 kernel: audit: type=1103 audit(1768437213.523:797): pid=4942 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:33.540062 kernel: audit: type=1006 audit(1768437213.524:798): pid=4942 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 15 00:33:33.540244 kernel: audit: type=1300 audit(1768437213.524:798): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcaa5422b0 a2=3 a3=0 items=0 ppid=1 pid=4942 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:33:33.524000 audit[4942]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcaa5422b0 a2=3 a3=0 items=0 ppid=1 pid=4942 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:33:33.524000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:33:33.545901 kernel: audit: type=1327 audit(1768437213.524:798): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:33:33.557132 systemd-logind[1592]: New session 13 of user core. Jan 15 00:33:33.562672 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 15 00:33:33.578112 kernel: audit: type=1105 audit(1768437213.569:799): pid=4942 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:33.569000 audit[4942]: USER_START pid=4942 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:33.579000 audit[4945]: CRED_ACQ pid=4945 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:33.588081 kernel: audit: type=1103 audit(1768437213.579:800): pid=4945 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:33.831163 sshd[4945]: Connection closed by 20.161.92.111 port 53786 Jan 15 00:33:33.831944 sshd-session[4942]: pam_unix(sshd:session): session closed for user core Jan 15 00:33:33.834000 audit[4942]: USER_END pid=4942 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:33.842076 kernel: audit: type=1106 audit(1768437213.834:801): pid=4942 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:33.847596 systemd[1]: sshd@12-164.92.64.55:22-20.161.92.111:53786.service: Deactivated successfully. Jan 15 00:33:33.840000 audit[4942]: CRED_DISP pid=4942 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:33.855050 kernel: audit: type=1104 audit(1768437213.840:802): pid=4942 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:33.858773 systemd[1]: session-13.scope: Deactivated successfully. Jan 15 00:33:33.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-164.92.64.55:22-20.161.92.111:53786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:33:33.861871 systemd-logind[1592]: Session 13 logged out. Waiting for processes to exit. Jan 15 00:33:33.865622 systemd-logind[1592]: Removed session 13. 
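The retry cadence visible in the timestamps (pull attempts seconds apart at first, then minutes apart once ImagePullBackOff takes over) reflects the kubelet's per-image exponential backoff. A minimal sketch of such a schedule, assuming the commonly cited defaults of a 10-second initial delay doubling up to a 5-minute cap (the exact values are kubelet implementation details and are not stated in this log):

    def backoff_schedule(initial: float = 10.0, cap: float = 300.0, attempts: int = 8):
        """Yield the delay before each retry under doubling backoff with a cap."""
        delay = initial
        for _ in range(attempts):
            yield delay
            delay = min(delay * 2, cap)

    if __name__ == "__main__":
        # e.g. 10, 20, 40, 80, 160, 300, 300, 300 seconds between pull attempts
        print([int(d) for d in backoff_schedule()])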
Jan 15 00:33:34.990118 kubelet[2785]: E0115 00:33:34.989762 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d4f97847b-lrvs5" podUID="5adbdfdd-96a2-41eb-8663-7460bd3865b9" Jan 15 00:33:35.987545 containerd[1616]: time="2026-01-15T00:33:35.987406641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 15 00:33:36.339056 containerd[1616]: time="2026-01-15T00:33:36.338791002Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:33:36.339780 containerd[1616]: time="2026-01-15T00:33:36.339728133Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 15 00:33:36.340132 containerd[1616]: time="2026-01-15T00:33:36.339766108Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 15 00:33:36.340325 kubelet[2785]: E0115 00:33:36.340281 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 00:33:36.341327 kubelet[2785]: E0115 00:33:36.340345 2785 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 00:33:36.341327 kubelet[2785]: E0115 00:33:36.340549 2785 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mpqkw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-fmmn9_calico-system(a6d2aaa6-9d35-4f8a-99b8-b75c10539cd4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 15 00:33:36.342136 kubelet[2785]: E0115 00:33:36.342094 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fmmn9" podUID="a6d2aaa6-9d35-4f8a-99b8-b75c10539cd4" Jan 15 00:33:37.988287 kubelet[2785]: E0115 00:33:37.988237 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-586c796f68-gf7fx" podUID="b432d05d-ed71-4758-b9af-7738bf34afb7" Jan 15 00:33:38.906390 systemd[1]: Started sshd@13-164.92.64.55:22-20.161.92.111:53794.service - OpenSSH per-connection server daemon (20.161.92.111:53794). Jan 15 00:33:38.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-164.92.64.55:22-20.161.92.111:53794 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:33:38.908375 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 00:33:38.913220 kernel: audit: type=1130 audit(1768437218.906:804): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-164.92.64.55:22-20.161.92.111:53794 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:33:39.307129 kernel: audit: type=1101 audit(1768437219.300:805): pid=4969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:39.300000 audit[4969]: USER_ACCT pid=4969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:39.305226 sshd-session[4969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:33:39.307806 sshd[4969]: Accepted publickey for core from 20.161.92.111 port 53794 ssh2: RSA SHA256:EI6zOIQQ/KS+Bep1WC3vAdrkfex1g89wTRiiOfwntxI Jan 15 00:33:39.303000 audit[4969]: CRED_ACQ pid=4969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:39.313425 kernel: audit: type=1103 audit(1768437219.303:806): pid=4969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:39.321202 kernel: audit: type=1006 audit(1768437219.303:807): pid=4969 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 15 00:33:39.321381 systemd-logind[1592]: New session 14 of user core. 
Jan 15 00:33:39.303000 audit[4969]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff264dae10 a2=3 a3=0 items=0 ppid=1 pid=4969 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:33:39.329507 kernel: audit: type=1300 audit(1768437219.303:807): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff264dae10 a2=3 a3=0 items=0 ppid=1 pid=4969 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:33:39.329637 kernel: audit: type=1327 audit(1768437219.303:807): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:33:39.303000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:33:39.330006 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 15 00:33:39.333000 audit[4969]: USER_START pid=4969 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:39.341208 kernel: audit: type=1105 audit(1768437219.333:808): pid=4969 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:39.340000 audit[4972]: CRED_ACQ pid=4972 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:39.346071 kernel: audit: type=1103 audit(1768437219.340:809): pid=4972 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:39.620557 sshd[4972]: Connection closed by 20.161.92.111 port 53794 Jan 15 00:33:39.623288 sshd-session[4969]: pam_unix(sshd:session): session closed for user core Jan 15 00:33:39.633941 kernel: audit: type=1106 audit(1768437219.625:810): pid=4969 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:39.625000 audit[4969]: USER_END pid=4969 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:39.631059 systemd[1]: sshd@13-164.92.64.55:22-20.161.92.111:53794.service: Deactivated successfully. Jan 15 00:33:39.633291 systemd[1]: session-14.scope: Deactivated successfully. 
Jan 15 00:33:39.625000 audit[4969]: CRED_DISP pid=4969 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:39.645173 kernel: audit: type=1104 audit(1768437219.625:811): pid=4969 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:39.645097 systemd-logind[1592]: Session 14 logged out. Waiting for processes to exit. Jan 15 00:33:39.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-164.92.64.55:22-20.161.92.111:53794 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:33:39.647247 systemd-logind[1592]: Removed session 14. Jan 15 00:33:39.988406 containerd[1616]: time="2026-01-15T00:33:39.987871667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 15 00:33:40.410898 containerd[1616]: time="2026-01-15T00:33:40.410836844Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:33:40.411587 containerd[1616]: time="2026-01-15T00:33:40.411528458Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 15 00:33:40.411705 containerd[1616]: time="2026-01-15T00:33:40.411621934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 15 00:33:40.411961 kubelet[2785]: E0115 00:33:40.411916 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 00:33:40.412408 kubelet[2785]: E0115 00:33:40.411975 2785 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 00:33:40.413187 kubelet[2785]: E0115 00:33:40.413131 2785 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:1c85d4f6dcf249e199926edb662227fb,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xp2m5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7fdcf9f989-nm8zk_calico-system(a4b16ce3-dca7-42f9-90d7-10ddcc6423d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 15 00:33:40.416585 containerd[1616]: time="2026-01-15T00:33:40.416319836Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 15 00:33:40.811051 containerd[1616]: time="2026-01-15T00:33:40.810328164Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:33:40.812181 containerd[1616]: time="2026-01-15T00:33:40.811997825Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 15 00:33:40.812181 containerd[1616]: time="2026-01-15T00:33:40.812050198Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 15 00:33:40.813329 kubelet[2785]: E0115 00:33:40.813255 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 00:33:40.813473 kubelet[2785]: E0115 00:33:40.813345 2785 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 00:33:40.813543 kubelet[2785]: E0115 00:33:40.813492 2785 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xp2m5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7fdcf9f989-nm8zk_calico-system(a4b16ce3-dca7-42f9-90d7-10ddcc6423d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 15 00:33:40.815376 kubelet[2785]: E0115 00:33:40.815264 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fdcf9f989-nm8zk" podUID="a4b16ce3-dca7-42f9-90d7-10ddcc6423d9" Jan 15 00:33:43.987531 kubelet[2785]: E0115 00:33:43.987413 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-586c796f68-7pr9q" podUID="3b2df0f5-3af7-40bf-8f6e-f5e8397900ad" Jan 15 00:33:44.705182 kernel: kauditd_printk_skb: 1 callbacks suppressed 
Jan 15 00:33:44.705332 kernel: audit: type=1130 audit(1768437224.698:813): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-164.92.64.55:22-20.161.92.111:37356 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:33:44.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-164.92.64.55:22-20.161.92.111:37356 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:33:44.699736 systemd[1]: Started sshd@14-164.92.64.55:22-20.161.92.111:37356.service - OpenSSH per-connection server daemon (20.161.92.111:37356). Jan 15 00:33:45.085086 sshd[5011]: Accepted publickey for core from 20.161.92.111 port 37356 ssh2: RSA SHA256:EI6zOIQQ/KS+Bep1WC3vAdrkfex1g89wTRiiOfwntxI Jan 15 00:33:45.083000 audit[5011]: USER_ACCT pid=5011 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:45.090142 kernel: audit: type=1101 audit(1768437225.083:814): pid=5011 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:45.091557 sshd-session[5011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:33:45.089000 audit[5011]: CRED_ACQ pid=5011 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:45.099203 kernel: audit: type=1103 audit(1768437225.089:815): pid=5011 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:45.099349 kernel: audit: type=1006 audit(1768437225.089:816): pid=5011 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 15 00:33:45.089000 audit[5011]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd92a28d0 a2=3 a3=0 items=0 ppid=1 pid=5011 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:33:45.108089 kernel: audit: type=1300 audit(1768437225.089:816): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd92a28d0 a2=3 a3=0 items=0 ppid=1 pid=5011 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:33:45.089000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:33:45.111067 kernel: audit: type=1327 audit(1768437225.089:816): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:33:45.114428 systemd-logind[1592]: New session 15 of user core. 
Jan 15 00:33:45.118394 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 15 00:33:45.122000 audit[5011]: USER_START pid=5011 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:45.130450 kernel: audit: type=1105 audit(1768437225.122:817): pid=5011 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:45.130000 audit[5014]: CRED_ACQ pid=5014 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:45.136055 kernel: audit: type=1103 audit(1768437225.130:818): pid=5014 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:45.432750 sshd[5014]: Connection closed by 20.161.92.111 port 37356 Jan 15 00:33:45.434355 sshd-session[5011]: pam_unix(sshd:session): session closed for user core Jan 15 00:33:45.435000 audit[5011]: USER_END pid=5011 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:45.443069 kernel: audit: type=1106 audit(1768437225.435:819): pid=5011 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:45.443571 systemd[1]: sshd@14-164.92.64.55:22-20.161.92.111:37356.service: Deactivated successfully. Jan 15 00:33:45.436000 audit[5011]: CRED_DISP pid=5011 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:45.449744 systemd[1]: session-15.scope: Deactivated successfully. Jan 15 00:33:45.453306 kernel: audit: type=1104 audit(1768437225.436:820): pid=5011 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:45.455632 systemd-logind[1592]: Session 15 logged out. Waiting for processes to exit. Jan 15 00:33:45.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-164.92.64.55:22-20.161.92.111:37356 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:33:45.461029 systemd-logind[1592]: Removed session 15. Jan 15 00:33:46.989407 containerd[1616]: time="2026-01-15T00:33:46.988764250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 15 00:33:47.305139 containerd[1616]: time="2026-01-15T00:33:47.305059336Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:33:47.306122 containerd[1616]: time="2026-01-15T00:33:47.306066816Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 15 00:33:47.306287 containerd[1616]: time="2026-01-15T00:33:47.306190722Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 15 00:33:47.308261 kubelet[2785]: E0115 00:33:47.308190 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 00:33:47.309393 kubelet[2785]: E0115 00:33:47.309055 2785 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 00:33:47.309393 kubelet[2785]: E0115 00:33:47.309268 2785 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-brgll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-rjlcz_calico-system(14ced92d-cf89-41f0-99bf-edc9c92a737b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 15 00:33:47.311984 containerd[1616]: time="2026-01-15T00:33:47.311610370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 15 00:33:47.647423 containerd[1616]: time="2026-01-15T00:33:47.646841157Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:33:47.647930 containerd[1616]: time="2026-01-15T00:33:47.647859089Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 15 00:33:47.648924 containerd[1616]: time="2026-01-15T00:33:47.647974038Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 15 00:33:47.649044 kubelet[2785]: E0115 00:33:47.648178 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 00:33:47.649044 kubelet[2785]: E0115 00:33:47.648236 2785 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 00:33:47.649044 kubelet[2785]: E0115 00:33:47.648359 2785 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-brgll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rjlcz_calico-system(14ced92d-cf89-41f0-99bf-edc9c92a737b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 15 00:33:47.650029 kubelet[2785]: E0115 00:33:47.649944 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rjlcz" podUID="14ced92d-cf89-41f0-99bf-edc9c92a737b" Jan 15 00:33:47.990156 kubelet[2785]: E0115 00:33:47.987889 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fmmn9" podUID="a6d2aaa6-9d35-4f8a-99b8-b75c10539cd4" Jan 15 00:33:47.991332 containerd[1616]: time="2026-01-15T00:33:47.991071844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 15 
00:33:48.365116 containerd[1616]: time="2026-01-15T00:33:48.364599475Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:33:48.365484 containerd[1616]: time="2026-01-15T00:33:48.365437906Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 15 00:33:48.365593 containerd[1616]: time="2026-01-15T00:33:48.365538213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 15 00:33:48.366071 kubelet[2785]: E0115 00:33:48.365990 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 00:33:48.366722 kubelet[2785]: E0115 00:33:48.366458 2785 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 00:33:48.366722 kubelet[2785]: E0115 00:33:48.366643 2785 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ts2lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7d4f97847b-lrvs5_calico-system(5adbdfdd-96a2-41eb-8663-7460bd3865b9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 15 00:33:48.368404 kubelet[2785]: E0115 00:33:48.368318 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d4f97847b-lrvs5" podUID="5adbdfdd-96a2-41eb-8663-7460bd3865b9" Jan 15 00:33:49.037560 containerd[1616]: time="2026-01-15T00:33:49.037501781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 00:33:49.409738 containerd[1616]: time="2026-01-15T00:33:49.409403356Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:33:49.410946 containerd[1616]: time="2026-01-15T00:33:49.410685959Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 00:33:49.410946 containerd[1616]: time="2026-01-15T00:33:49.410725429Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 00:33:49.411071 kubelet[2785]: E0115 00:33:49.410971 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:33:49.411459 kubelet[2785]: E0115 00:33:49.411164 2785 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:33:49.411877 kubelet[2785]: E0115 00:33:49.411791 2785 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tl2m7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-586c796f68-gf7fx_calico-apiserver(b432d05d-ed71-4758-b9af-7738bf34afb7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 00:33:49.413147 kubelet[2785]: E0115 00:33:49.413107 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-586c796f68-gf7fx" podUID="b432d05d-ed71-4758-b9af-7738bf34afb7" Jan 15 00:33:50.509699 systemd[1]: Started sshd@15-164.92.64.55:22-20.161.92.111:37358.service - OpenSSH per-connection server daemon (20.161.92.111:37358). Jan 15 00:33:50.517201 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 00:33:50.517385 kernel: audit: type=1130 audit(1768437230.510:822): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-164.92.64.55:22-20.161.92.111:37358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:33:50.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-164.92.64.55:22-20.161.92.111:37358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:33:50.948000 audit[5033]: USER_ACCT pid=5033 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:50.950519 sshd[5033]: Accepted publickey for core from 20.161.92.111 port 37358 ssh2: RSA SHA256:EI6zOIQQ/KS+Bep1WC3vAdrkfex1g89wTRiiOfwntxI Jan 15 00:33:50.955061 kernel: audit: type=1101 audit(1768437230.948:823): pid=5033 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:50.957552 sshd-session[5033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:33:50.955000 audit[5033]: CRED_ACQ pid=5033 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:50.966192 kernel: audit: type=1103 audit(1768437230.955:824): pid=5033 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:50.977881 kernel: audit: type=1006 audit(1768437230.955:825): pid=5033 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 15 00:33:50.978076 kernel: audit: type=1300 audit(1768437230.955:825): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdaf8d3fe0 a2=3 a3=0 items=0 ppid=1 pid=5033 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:33:50.955000 audit[5033]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdaf8d3fe0 a2=3 a3=0 items=0 ppid=1 pid=5033 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:33:50.976889 systemd-logind[1592]: New session 16 of user core. Jan 15 00:33:50.985916 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 15 00:33:50.955000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:33:50.990610 kernel: audit: type=1327 audit(1768437230.955:825): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:33:51.000000 audit[5033]: USER_START pid=5033 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:51.008178 kernel: audit: type=1105 audit(1768437231.000:826): pid=5033 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:51.009000 audit[5036]: CRED_ACQ pid=5036 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:51.019143 kernel: audit: type=1103 audit(1768437231.009:827): pid=5036 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:51.342490 sshd[5036]: Connection closed by 20.161.92.111 port 37358 Jan 15 00:33:51.342959 sshd-session[5033]: pam_unix(sshd:session): session closed for user core Jan 15 00:33:51.344000 audit[5033]: USER_END pid=5033 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:51.350377 systemd-logind[1592]: Session 16 logged out. Waiting for processes to exit. Jan 15 00:33:51.351123 kernel: audit: type=1106 audit(1768437231.344:828): pid=5033 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:51.344000 audit[5033]: CRED_DISP pid=5033 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:51.353437 systemd[1]: sshd@15-164.92.64.55:22-20.161.92.111:37358.service: Deactivated successfully. Jan 15 00:33:51.357965 systemd[1]: session-16.scope: Deactivated successfully. 
Jan 15 00:33:51.358825 kernel: audit: type=1104 audit(1768437231.344:829): pid=5033 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:51.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-164.92.64.55:22-20.161.92.111:37358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:33:51.365453 systemd-logind[1592]: Removed session 16. Jan 15 00:33:51.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-164.92.64.55:22-20.161.92.111:37374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:33:51.423464 systemd[1]: Started sshd@16-164.92.64.55:22-20.161.92.111:37374.service - OpenSSH per-connection server daemon (20.161.92.111:37374). Jan 15 00:33:51.782000 audit[5048]: USER_ACCT pid=5048 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:51.785501 sshd[5048]: Accepted publickey for core from 20.161.92.111 port 37374 ssh2: RSA SHA256:EI6zOIQQ/KS+Bep1WC3vAdrkfex1g89wTRiiOfwntxI Jan 15 00:33:51.786000 audit[5048]: CRED_ACQ pid=5048 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:51.786000 audit[5048]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb5e656e0 a2=3 a3=0 items=0 ppid=1 pid=5048 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:33:51.786000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:33:51.789212 sshd-session[5048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:33:51.803218 systemd-logind[1592]: New session 17 of user core. Jan 15 00:33:51.809374 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 15 00:33:51.817000 audit[5048]: USER_START pid=5048 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:51.821000 audit[5051]: CRED_ACQ pid=5051 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:51.989796 kubelet[2785]: E0115 00:33:51.989666 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fdcf9f989-nm8zk" podUID="a4b16ce3-dca7-42f9-90d7-10ddcc6423d9" Jan 15 00:33:52.248193 sshd[5051]: Connection closed by 20.161.92.111 port 37374 Jan 15 00:33:52.249343 sshd-session[5048]: pam_unix(sshd:session): session closed for user core Jan 15 00:33:52.250000 audit[5048]: USER_END pid=5048 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:52.251000 audit[5048]: CRED_DISP pid=5048 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:52.256871 systemd[1]: sshd@16-164.92.64.55:22-20.161.92.111:37374.service: Deactivated successfully. Jan 15 00:33:52.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-164.92.64.55:22-20.161.92.111:37374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:33:52.260321 systemd[1]: session-17.scope: Deactivated successfully. Jan 15 00:33:52.262645 systemd-logind[1592]: Session 17 logged out. Waiting for processes to exit. Jan 15 00:33:52.265299 systemd-logind[1592]: Removed session 17. Jan 15 00:33:52.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-164.92.64.55:22-20.161.92.111:37388 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:33:52.322519 systemd[1]: Started sshd@17-164.92.64.55:22-20.161.92.111:37388.service - OpenSSH per-connection server daemon (20.161.92.111:37388). 
Jan 15 00:33:52.738000 audit[5060]: USER_ACCT pid=5060 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:52.740435 sshd[5060]: Accepted publickey for core from 20.161.92.111 port 37388 ssh2: RSA SHA256:EI6zOIQQ/KS+Bep1WC3vAdrkfex1g89wTRiiOfwntxI Jan 15 00:33:52.741000 audit[5060]: CRED_ACQ pid=5060 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:52.741000 audit[5060]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffebdb04b40 a2=3 a3=0 items=0 ppid=1 pid=5060 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:33:52.741000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:33:52.742695 sshd-session[5060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:33:52.757116 systemd-logind[1592]: New session 18 of user core. Jan 15 00:33:52.760843 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 15 00:33:52.765000 audit[5060]: USER_START pid=5060 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:52.771000 audit[5077]: CRED_ACQ pid=5077 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:52.987791 kubelet[2785]: E0115 00:33:52.987723 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:33:53.987000 audit[5088]: NETFILTER_CFG table=filter:144 family=2 entries=26 op=nft_register_rule pid=5088 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:33:53.987000 audit[5088]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc1b7d76a0 a2=0 a3=7ffc1b7d768c items=0 ppid=2937 pid=5088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:33:53.987000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:33:53.997000 audit[5088]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=5088 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:33:53.997000 audit[5088]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc1b7d76a0 a2=0 a3=0 items=0 ppid=2937 pid=5088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:33:53.997000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:33:54.000176 sshd[5077]: Connection closed by 20.161.92.111 port 37388 Jan 15 00:33:54.004968 sshd-session[5060]: pam_unix(sshd:session): session closed for user core Jan 15 00:33:54.011000 audit[5060]: USER_END pid=5060 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:54.012000 audit[5060]: CRED_DISP pid=5060 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:54.019375 systemd[1]: sshd@17-164.92.64.55:22-20.161.92.111:37388.service: Deactivated successfully. Jan 15 00:33:54.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-164.92.64.55:22-20.161.92.111:37388 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:33:54.026417 systemd[1]: session-18.scope: Deactivated successfully. Jan 15 00:33:54.029062 systemd-logind[1592]: Session 18 logged out. Waiting for processes to exit. Jan 15 00:33:54.033047 systemd-logind[1592]: Removed session 18. Jan 15 00:33:54.043000 audit[5093]: NETFILTER_CFG table=filter:146 family=2 entries=38 op=nft_register_rule pid=5093 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:33:54.043000 audit[5093]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe0aba9a90 a2=0 a3=7ffe0aba9a7c items=0 ppid=2937 pid=5093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:33:54.043000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:33:54.060000 audit[5093]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=5093 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:33:54.060000 audit[5093]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe0aba9a90 a2=0 a3=0 items=0 ppid=2937 pid=5093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:33:54.060000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:33:54.073167 systemd[1]: Started sshd@18-164.92.64.55:22-20.161.92.111:56404.service - OpenSSH per-connection server daemon (20.161.92.111:56404). Jan 15 00:33:54.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-164.92.64.55:22-20.161.92.111:56404 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:33:54.478000 audit[5095]: USER_ACCT pid=5095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:54.481067 sshd[5095]: Accepted publickey for core from 20.161.92.111 port 56404 ssh2: RSA SHA256:EI6zOIQQ/KS+Bep1WC3vAdrkfex1g89wTRiiOfwntxI Jan 15 00:33:54.480000 audit[5095]: CRED_ACQ pid=5095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:54.481000 audit[5095]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcbde74e50 a2=3 a3=0 items=0 ppid=1 pid=5095 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:33:54.481000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:33:54.484013 sshd-session[5095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:33:54.495794 systemd-logind[1592]: New session 19 of user core. Jan 15 00:33:54.499364 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 15 00:33:54.503000 audit[5095]: USER_START pid=5095 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:54.507000 audit[5100]: CRED_ACQ pid=5100 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:55.086597 sshd[5100]: Connection closed by 20.161.92.111 port 56404 Jan 15 00:33:55.087344 sshd-session[5095]: pam_unix(sshd:session): session closed for user core Jan 15 00:33:55.090000 audit[5095]: USER_END pid=5095 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:55.090000 audit[5095]: CRED_DISP pid=5095 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:55.096816 systemd-logind[1592]: Session 19 logged out. Waiting for processes to exit. Jan 15 00:33:55.097487 systemd[1]: sshd@18-164.92.64.55:22-20.161.92.111:56404.service: Deactivated successfully. Jan 15 00:33:55.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-164.92.64.55:22-20.161.92.111:56404 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:33:55.101852 systemd[1]: session-19.scope: Deactivated successfully. 
Jan 15 00:33:55.106764 systemd-logind[1592]: Removed session 19. Jan 15 00:33:55.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-164.92.64.55:22-20.161.92.111:56408 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:33:55.171486 systemd[1]: Started sshd@19-164.92.64.55:22-20.161.92.111:56408.service - OpenSSH per-connection server daemon (20.161.92.111:56408). Jan 15 00:33:55.559000 audit[5110]: USER_ACCT pid=5110 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:55.561596 kernel: kauditd_printk_skb: 47 callbacks suppressed Jan 15 00:33:55.561675 kernel: audit: type=1101 audit(1768437235.559:863): pid=5110 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:55.566463 sshd[5110]: Accepted publickey for core from 20.161.92.111 port 56408 ssh2: RSA SHA256:EI6zOIQQ/KS+Bep1WC3vAdrkfex1g89wTRiiOfwntxI Jan 15 00:33:55.569624 sshd-session[5110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:33:55.567000 audit[5110]: CRED_ACQ pid=5110 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:55.576050 kernel: audit: type=1103 audit(1768437235.567:864): pid=5110 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:55.583012 kernel: audit: type=1006 audit(1768437235.567:865): pid=5110 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 15 00:33:55.583138 kernel: audit: type=1300 audit(1768437235.567:865): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe86c68420 a2=3 a3=0 items=0 ppid=1 pid=5110 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:33:55.567000 audit[5110]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe86c68420 a2=3 a3=0 items=0 ppid=1 pid=5110 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:33:55.587621 systemd-logind[1592]: New session 20 of user core. Jan 15 00:33:55.567000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:33:55.591046 kernel: audit: type=1327 audit(1768437235.567:865): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:33:55.593348 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 15 00:33:55.597000 audit[5110]: USER_START pid=5110 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:55.604126 kernel: audit: type=1105 audit(1768437235.597:866): pid=5110 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:55.602000 audit[5113]: CRED_ACQ pid=5113 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:55.609060 kernel: audit: type=1103 audit(1768437235.602:867): pid=5113 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:55.876182 sshd[5113]: Connection closed by 20.161.92.111 port 56408 Jan 15 00:33:55.877863 sshd-session[5110]: pam_unix(sshd:session): session closed for user core Jan 15 00:33:55.879000 audit[5110]: USER_END pid=5110 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:55.884583 systemd[1]: sshd@19-164.92.64.55:22-20.161.92.111:56408.service: Deactivated successfully. Jan 15 00:33:55.885247 kernel: audit: type=1106 audit(1768437235.879:868): pid=5110 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:55.885293 kernel: audit: type=1104 audit(1768437235.879:869): pid=5110 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:55.879000 audit[5110]: CRED_DISP pid=5110 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:33:55.888997 systemd[1]: session-20.scope: Deactivated successfully. Jan 15 00:33:55.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-164.92.64.55:22-20.161.92.111:56408 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:33:55.895789 kernel: audit: type=1131 audit(1768437235.884:870): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-164.92.64.55:22-20.161.92.111:56408 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:33:55.899093 systemd-logind[1592]: Session 20 logged out. Waiting for processes to exit. Jan 15 00:33:55.900603 systemd-logind[1592]: Removed session 20. Jan 15 00:33:57.989805 containerd[1616]: time="2026-01-15T00:33:57.987934083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 00:33:58.353205 containerd[1616]: time="2026-01-15T00:33:58.353147683Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:33:58.353993 containerd[1616]: time="2026-01-15T00:33:58.353934828Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 00:33:58.354168 containerd[1616]: time="2026-01-15T00:33:58.354056683Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 00:33:58.356056 kubelet[2785]: E0115 00:33:58.354385 2785 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:33:58.357308 kubelet[2785]: E0115 00:33:58.356679 2785 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:33:58.357308 kubelet[2785]: E0115 00:33:58.356903 2785 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8d7lk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-586c796f68-7pr9q_calico-apiserver(3b2df0f5-3af7-40bf-8f6e-f5e8397900ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 00:33:58.358686 kubelet[2785]: E0115 00:33:58.358189 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-586c796f68-7pr9q" podUID="3b2df0f5-3af7-40bf-8f6e-f5e8397900ad" Jan 15 00:34:00.337000 audit[5124]: NETFILTER_CFG table=filter:148 family=2 entries=26 op=nft_register_rule pid=5124 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:34:00.337000 audit[5124]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd40110840 a2=0 a3=7ffd4011082c items=0 ppid=2937 pid=5124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:34:00.337000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:34:00.344000 audit[5124]: NETFILTER_CFG table=nat:149 family=2 entries=104 op=nft_register_chain pid=5124 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:34:00.344000 audit[5124]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffd40110840 a2=0 a3=7ffd4011082c items=0 ppid=2937 pid=5124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:34:00.344000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:34:00.958917 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 15 00:34:00.959071 kernel: audit: type=1130 audit(1768437240.955:873): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-164.92.64.55:22-20.161.92.111:56410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:34:00.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-164.92.64.55:22-20.161.92.111:56410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:34:00.956352 systemd[1]: Started sshd@20-164.92.64.55:22-20.161.92.111:56410.service - OpenSSH per-connection server daemon (20.161.92.111:56410). Jan 15 00:34:00.991725 kubelet[2785]: E0115 00:34:00.991663 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d4f97847b-lrvs5" podUID="5adbdfdd-96a2-41eb-8663-7460bd3865b9" Jan 15 00:34:00.993541 kubelet[2785]: E0115 00:34:00.992011 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rjlcz" podUID="14ced92d-cf89-41f0-99bf-edc9c92a737b" Jan 15 00:34:01.372000 audit[5126]: USER_ACCT pid=5126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:34:01.377786 sshd-session[5126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:34:01.379894 kernel: audit: type=1101 audit(1768437241.372:874): pid=5126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:34:01.382137 kernel: audit: type=1103 audit(1768437241.376:875): pid=5126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:34:01.376000 audit[5126]: CRED_ACQ pid=5126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:34:01.382309 sshd[5126]: Accepted publickey for core from 20.161.92.111 port 56410 ssh2: RSA 
SHA256:EI6zOIQQ/KS+Bep1WC3vAdrkfex1g89wTRiiOfwntxI Jan 15 00:34:01.392092 kernel: audit: type=1006 audit(1768437241.376:876): pid=5126 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 15 00:34:01.376000 audit[5126]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec6f2f790 a2=3 a3=0 items=0 ppid=1 pid=5126 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:34:01.401602 systemd-logind[1592]: New session 21 of user core. Jan 15 00:34:01.403321 kernel: audit: type=1300 audit(1768437241.376:876): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec6f2f790 a2=3 a3=0 items=0 ppid=1 pid=5126 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:34:01.403369 kernel: audit: type=1327 audit(1768437241.376:876): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:34:01.376000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:34:01.407927 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 15 00:34:01.416000 audit[5126]: USER_START pid=5126 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:34:01.425107 kernel: audit: type=1105 audit(1768437241.416:877): pid=5126 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:34:01.425241 kernel: audit: type=1103 audit(1768437241.423:878): pid=5129 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:34:01.423000 audit[5129]: CRED_ACQ pid=5129 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:34:01.709076 sshd[5129]: Connection closed by 20.161.92.111 port 56410 Jan 15 00:34:01.710450 sshd-session[5126]: pam_unix(sshd:session): session closed for user core Jan 15 00:34:01.713000 audit[5126]: USER_END pid=5126 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:34:01.719680 systemd[1]: sshd@20-164.92.64.55:22-20.161.92.111:56410.service: Deactivated successfully. 
Jan 15 00:34:01.722071 kernel: audit: type=1106 audit(1768437241.713:879): pid=5126 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:34:01.723639 systemd[1]: session-21.scope: Deactivated successfully. Jan 15 00:34:01.726062 systemd-logind[1592]: Session 21 logged out. Waiting for processes to exit. Jan 15 00:34:01.713000 audit[5126]: CRED_DISP pid=5126 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:34:01.731410 systemd-logind[1592]: Removed session 21. Jan 15 00:34:01.732098 kernel: audit: type=1104 audit(1768437241.713:880): pid=5126 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:34:01.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-164.92.64.55:22-20.161.92.111:56410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:34:01.987173 kubelet[2785]: E0115 00:34:01.985733 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:34:01.988087 kubelet[2785]: E0115 00:34:01.987460 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fmmn9" podUID="a6d2aaa6-9d35-4f8a-99b8-b75c10539cd4" Jan 15 00:34:03.987932 kubelet[2785]: E0115 00:34:03.987847 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-586c796f68-gf7fx" podUID="b432d05d-ed71-4758-b9af-7738bf34afb7" Jan 15 00:34:04.988585 kubelet[2785]: E0115 00:34:04.987624 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:34:06.789529 systemd[1]: Started sshd@21-164.92.64.55:22-20.161.92.111:36614.service - OpenSSH per-connection server daemon (20.161.92.111:36614). 
Jan 15 00:34:06.792217 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 00:34:06.792282 kernel: audit: type=1130 audit(1768437246.789:882): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-164.92.64.55:22-20.161.92.111:36614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:34:06.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-164.92.64.55:22-20.161.92.111:36614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:34:06.991742 kubelet[2785]: E0115 00:34:06.991677 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fdcf9f989-nm8zk" podUID="a4b16ce3-dca7-42f9-90d7-10ddcc6423d9" Jan 15 00:34:07.177000 audit[5143]: USER_ACCT pid=5143 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:34:07.181470 sshd[5143]: Accepted publickey for core from 20.161.92.111 port 36614 ssh2: RSA SHA256:EI6zOIQQ/KS+Bep1WC3vAdrkfex1g89wTRiiOfwntxI Jan 15 00:34:07.183074 sshd-session[5143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:34:07.184770 kernel: audit: type=1101 audit(1768437247.177:883): pid=5143 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:34:07.180000 audit[5143]: CRED_ACQ pid=5143 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:34:07.192220 kernel: audit: type=1103 audit(1768437247.180:884): pid=5143 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:34:07.192323 kernel: audit: type=1006 audit(1768437247.180:885): pid=5143 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 15 00:34:07.180000 audit[5143]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc25ec6c00 a2=3 a3=0 items=0 ppid=1 pid=5143 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:34:07.195388 systemd-logind[1592]: New session 22 of user core. Jan 15 00:34:07.200610 kernel: audit: type=1300 audit(1768437247.180:885): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc25ec6c00 a2=3 a3=0 items=0 ppid=1 pid=5143 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:34:07.180000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:34:07.204410 kernel: audit: type=1327 audit(1768437247.180:885): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:34:07.204089 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 15 00:34:07.215000 audit[5143]: USER_START pid=5143 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:34:07.222069 kernel: audit: type=1105 audit(1768437247.215:886): pid=5143 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:34:07.225000 audit[5146]: CRED_ACQ pid=5146 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:34:07.235085 kernel: audit: type=1103 audit(1768437247.225:887): pid=5146 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:34:07.489734 sshd[5146]: Connection closed by 20.161.92.111 port 36614 Jan 15 00:34:07.490550 sshd-session[5143]: pam_unix(sshd:session): session closed for user core Jan 15 00:34:07.491000 audit[5143]: USER_END pid=5143 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:34:07.499137 kernel: audit: type=1106 audit(1768437247.491:888): pid=5143 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:34:07.497474 systemd-logind[1592]: Session 22 logged out. Waiting for processes to exit. 
Jan 15 00:34:07.491000 audit[5143]: CRED_DISP pid=5143 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:34:07.500143 systemd[1]: sshd@21-164.92.64.55:22-20.161.92.111:36614.service: Deactivated successfully. Jan 15 00:34:07.505690 systemd[1]: session-22.scope: Deactivated successfully. Jan 15 00:34:07.507712 kernel: audit: type=1104 audit(1768437247.491:889): pid=5143 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:34:07.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-164.92.64.55:22-20.161.92.111:36614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:34:07.511433 systemd-logind[1592]: Removed session 22. Jan 15 00:34:09.988736 kubelet[2785]: E0115 00:34:09.988653 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-586c796f68-7pr9q" podUID="3b2df0f5-3af7-40bf-8f6e-f5e8397900ad" Jan 15 00:34:12.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-164.92.64.55:22-20.161.92.111:49972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:34:12.565614 systemd[1]: Started sshd@22-164.92.64.55:22-20.161.92.111:49972.service - OpenSSH per-connection server daemon (20.161.92.111:49972). Jan 15 00:34:12.569265 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 00:34:12.569386 kernel: audit: type=1130 audit(1768437252.564:891): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-164.92.64.55:22-20.161.92.111:49972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:34:12.929000 audit[5158]: USER_ACCT pid=5158 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:34:12.936163 sshd[5158]: Accepted publickey for core from 20.161.92.111 port 49972 ssh2: RSA SHA256:EI6zOIQQ/KS+Bep1WC3vAdrkfex1g89wTRiiOfwntxI Jan 15 00:34:12.938438 kernel: audit: type=1101 audit(1768437252.929:892): pid=5158 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:34:12.938000 audit[5158]: CRED_ACQ pid=5158 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:34:12.941502 sshd-session[5158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:34:12.946081 kernel: audit: type=1103 audit(1768437252.938:893): pid=5158 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:34:12.938000 audit[5158]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc9ead140 a2=3 a3=0 items=0 ppid=1 pid=5158 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:34:12.958161 kernel: audit: type=1006 audit(1768437252.938:894): pid=5158 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 15 00:34:12.958244 kernel: audit: type=1300 audit(1768437252.938:894): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc9ead140 a2=3 a3=0 items=0 ppid=1 pid=5158 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:34:12.963177 systemd-logind[1592]: New session 23 of user core. Jan 15 00:34:12.938000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:34:12.963778 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 15 00:34:12.967072 kernel: audit: type=1327 audit(1768437252.938:894): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:34:12.969000 audit[5158]: USER_START pid=5158 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:34:12.977184 kernel: audit: type=1105 audit(1768437252.969:895): pid=5158 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:34:12.976000 audit[5161]: CRED_ACQ pid=5161 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:34:12.984071 kernel: audit: type=1103 audit(1768437252.976:896): pid=5161 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:34:12.986082 kubelet[2785]: E0115 00:34:12.985860 2785 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 15 00:34:13.233184 sshd[5161]: Connection closed by 20.161.92.111 port 49972 Jan 15 00:34:13.234286 sshd-session[5158]: pam_unix(sshd:session): session closed for user core Jan 15 00:34:13.236000 audit[5158]: USER_END pid=5158 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:34:13.243068 kernel: audit: type=1106 audit(1768437253.236:897): pid=5158 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:34:13.237000 audit[5158]: CRED_DISP pid=5158 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:34:13.244651 systemd[1]: sshd@22-164.92.64.55:22-20.161.92.111:49972.service: Deactivated successfully. Jan 15 00:34:13.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-164.92.64.55:22-20.161.92.111:49972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:34:13.249202 kernel: audit: type=1104 audit(1768437253.237:898): pid=5158 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 15 00:34:13.252853 systemd[1]: session-23.scope: Deactivated successfully. Jan 15 00:34:13.258742 systemd-logind[1592]: Session 23 logged out. Waiting for processes to exit. Jan 15 00:34:13.261642 systemd-logind[1592]: Removed session 23. Jan 15 00:34:13.986699 kubelet[2785]: E0115 00:34:13.986622 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fmmn9" podUID="a6d2aaa6-9d35-4f8a-99b8-b75c10539cd4" Jan 15 00:34:14.990193 kubelet[2785]: E0115 00:34:14.990138 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-586c796f68-gf7fx" podUID="b432d05d-ed71-4758-b9af-7738bf34afb7" Jan 15 00:34:15.988903 kubelet[2785]: E0115 00:34:15.988816 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d4f97847b-lrvs5" podUID="5adbdfdd-96a2-41eb-8663-7460bd3865b9" Jan 15 00:34:15.989793 kubelet[2785]: E0115 00:34:15.989698 2785 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rjlcz" podUID="14ced92d-cf89-41f0-99bf-edc9c92a737b"