Dec 16 12:54:44.928773 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:17:57 -00 2025
Dec 16 12:54:44.928823 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee
Dec 16 12:54:44.928840 kernel: BIOS-provided physical RAM map:
Dec 16 12:54:44.928847 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 16 12:54:44.928854 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 16 12:54:44.928861 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 16 12:54:44.928870 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Dec 16 12:54:44.928882 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Dec 16 12:54:44.928889 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 16 12:54:44.928897 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 16 12:54:44.928907 kernel: NX (Execute Disable) protection: active
Dec 16 12:54:44.928915 kernel: APIC: Static calls initialized
Dec 16 12:54:44.928922 kernel: SMBIOS 2.8 present.
Dec 16 12:54:44.928930 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Dec 16 12:54:44.928939 kernel: DMI: Memory slots populated: 1/1
Dec 16 12:54:44.928950 kernel: Hypervisor detected: KVM
Dec 16 12:54:44.928961 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Dec 16 12:54:44.928969 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 16 12:54:44.928977 kernel: kvm-clock: using sched offset of 3783914486 cycles
Dec 16 12:54:44.928987 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 16 12:54:44.928996 kernel: tsc: Detected 2494.174 MHz processor
Dec 16 12:54:44.929005 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 16 12:54:44.929015 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 16 12:54:44.929026 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Dec 16 12:54:44.929035 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 16 12:54:44.929044 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 16 12:54:44.929053 kernel: ACPI: Early table checksum verification disabled
Dec 16 12:54:44.929061 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Dec 16 12:54:44.929070 kernel: ACPI: RSDT 0x000000007FFE19FD 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:54:44.929079 kernel: ACPI: FACP 0x000000007FFE17E1 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:54:44.929091 kernel: ACPI: DSDT 0x000000007FFE0040 0017A1 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:54:44.929099 kernel: ACPI: FACS 0x000000007FFE0000 000040
Dec 16 12:54:44.929108 kernel: ACPI: APIC 0x000000007FFE1855 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:54:44.929117 kernel: ACPI: HPET 0x000000007FFE18D5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:54:44.929126 kernel: ACPI: SRAT 0x000000007FFE190D 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:54:44.929134 kernel: ACPI: WAET 0x000000007FFE19D5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:54:44.929143 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe17e1-0x7ffe1854]
Dec 16 12:54:44.929154 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe17e0]
Dec 16 12:54:44.929163 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Dec 16 12:54:44.929172 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe1855-0x7ffe18d4]
Dec 16 12:54:44.929185 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe18d5-0x7ffe190c]
Dec 16 12:54:44.929194 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe190d-0x7ffe19d4]
Dec 16 12:54:44.929203 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe19d5-0x7ffe19fc]
Dec 16 12:54:44.929214 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 16 12:54:44.929223 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Dec 16 12:54:44.929232 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff]
Dec 16 12:54:44.929242 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff]
Dec 16 12:54:44.929251 kernel: Zone ranges:
Dec 16 12:54:44.929260 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 16 12:54:44.929272 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Dec 16 12:54:44.929281 kernel: Normal empty
Dec 16 12:54:44.929290 kernel: Device empty
Dec 16 12:54:44.929299 kernel: Movable zone start for each node
Dec 16 12:54:44.929308 kernel: Early memory node ranges
Dec 16 12:54:44.929317 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 16 12:54:44.929326 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Dec 16 12:54:44.929334 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Dec 16 12:54:44.929346 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 16 12:54:44.929355 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 16 12:54:44.929364 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Dec 16 12:54:44.929373 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 16 12:54:44.929386 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 16 12:54:44.929395 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 16 12:54:44.929406 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 16 12:54:44.929418 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 16 12:54:44.929427 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 16 12:54:44.929438 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 16 12:54:44.929448 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 16 12:54:44.929456 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 16 12:54:44.929466 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 16 12:54:44.929475 kernel: TSC deadline timer available
Dec 16 12:54:44.929487 kernel: CPU topo: Max. logical packages: 1
Dec 16 12:54:44.929496 kernel: CPU topo: Max. logical dies: 1
Dec 16 12:54:44.929505 kernel: CPU topo: Max. dies per package: 1
Dec 16 12:54:44.929514 kernel: CPU topo: Max. threads per core: 1
Dec 16 12:54:44.929522 kernel: CPU topo: Num. cores per package: 2
Dec 16 12:54:44.929531 kernel: CPU topo: Num. threads per package: 2
Dec 16 12:54:44.929540 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Dec 16 12:54:44.929549 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 16 12:54:44.929561 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Dec 16 12:54:44.929570 kernel: Booting paravirtualized kernel on KVM
Dec 16 12:54:44.929579 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 16 12:54:44.929588 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 16 12:54:44.929597 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Dec 16 12:54:44.929606 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Dec 16 12:54:44.929615 kernel: pcpu-alloc: [0] 0 1
Dec 16 12:54:44.929626 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 16 12:54:44.931287 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee
Dec 16 12:54:44.931300 kernel: random: crng init done
Dec 16 12:54:44.931309 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 12:54:44.931319 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 16 12:54:44.931328 kernel: Fallback order for Node 0: 0
Dec 16 12:54:44.931338 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153
Dec 16 12:54:44.931354 kernel: Policy zone: DMA32
Dec 16 12:54:44.931363 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 12:54:44.931372 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 16 12:54:44.931382 kernel: Kernel/User page tables isolation: enabled
Dec 16 12:54:44.931391 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 16 12:54:44.931400 kernel: ftrace: allocated 157 pages with 5 groups
Dec 16 12:54:44.931410 kernel: Dynamic Preempt: voluntary
Dec 16 12:54:44.931421 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 12:54:44.931432 kernel: rcu: RCU event tracing is enabled.
Dec 16 12:54:44.931441 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 16 12:54:44.931451 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 12:54:44.931460 kernel: Rude variant of Tasks RCU enabled.
Dec 16 12:54:44.931469 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 12:54:44.931477 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 12:54:44.931489 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 16 12:54:44.931498 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 12:54:44.931514 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 12:54:44.931523 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 12:54:44.931533 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 16 12:54:44.931542 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 12:54:44.931551 kernel: Console: colour VGA+ 80x25
Dec 16 12:54:44.931563 kernel: printk: legacy console [tty0] enabled
Dec 16 12:54:44.931572 kernel: printk: legacy console [ttyS0] enabled
Dec 16 12:54:44.931581 kernel: ACPI: Core revision 20240827
Dec 16 12:54:44.931590 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 16 12:54:44.931608 kernel: APIC: Switch to symmetric I/O mode setup
Dec 16 12:54:44.931621 kernel: x2apic enabled
Dec 16 12:54:44.932662 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 16 12:54:44.932684 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 16 12:54:44.932695 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f3ba4d6b9, max_idle_ns: 440795310912 ns
Dec 16 12:54:44.932710 kernel: Calibrating delay loop (skipped) preset value.. 4988.34 BogoMIPS (lpj=2494174)
Dec 16 12:54:44.932726 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 16 12:54:44.932736 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 16 12:54:44.932746 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 16 12:54:44.932756 kernel: Spectre V2 : Mitigation: Retpolines
Dec 16 12:54:44.932768 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 16 12:54:44.932778 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 16 12:54:44.932788 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 16 12:54:44.932798 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 16 12:54:44.932807 kernel: MDS: Mitigation: Clear CPU buffers
Dec 16 12:54:44.932817 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 16 12:54:44.932827 kernel: active return thunk: its_return_thunk
Dec 16 12:54:44.932839 kernel: ITS: Mitigation: Aligned branch/return thunks
Dec 16 12:54:44.932849 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 16 12:54:44.932859 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 16 12:54:44.932869 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 16 12:54:44.932878 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 16 12:54:44.932888 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 16 12:54:44.932898 kernel: Freeing SMP alternatives memory: 32K
Dec 16 12:54:44.932910 kernel: pid_max: default: 32768 minimum: 301
Dec 16 12:54:44.932919 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 12:54:44.932929 kernel: landlock: Up and running.
Dec 16 12:54:44.932938 kernel: SELinux: Initializing.
Dec 16 12:54:44.932948 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 16 12:54:44.932958 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 16 12:54:44.932967 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Dec 16 12:54:44.932979 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Dec 16 12:54:44.932989 kernel: signal: max sigframe size: 1776
Dec 16 12:54:44.932999 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 12:54:44.933010 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 12:54:44.933019 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 12:54:44.933029 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 16 12:54:44.933038 kernel: smp: Bringing up secondary CPUs ...
Dec 16 12:54:44.933053 kernel: smpboot: x86: Booting SMP configuration:
Dec 16 12:54:44.933063 kernel: .... node #0, CPUs: #1
Dec 16 12:54:44.933073 kernel: smp: Brought up 1 node, 2 CPUs
Dec 16 12:54:44.933082 kernel: smpboot: Total of 2 processors activated (9976.69 BogoMIPS)
Dec 16 12:54:44.933093 kernel: Memory: 1985340K/2096612K available (14336K kernel code, 2444K rwdata, 29892K rodata, 15464K init, 2576K bss, 106708K reserved, 0K cma-reserved)
Dec 16 12:54:44.933103 kernel: devtmpfs: initialized
Dec 16 12:54:44.933112 kernel: x86/mm: Memory block size: 128MB
Dec 16 12:54:44.933124 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 12:54:44.933134 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 16 12:54:44.933144 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 12:54:44.933154 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 12:54:44.933163 kernel: audit: initializing netlink subsys (disabled)
Dec 16 12:54:44.933174 kernel: audit: type=2000 audit(1765889681.518:1): state=initialized audit_enabled=0 res=1
Dec 16 12:54:44.933183 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 12:54:44.933195 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 16 12:54:44.933205 kernel: cpuidle: using governor menu
Dec 16 12:54:44.933215 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 12:54:44.933224 kernel: dca service started, version 1.12.1
Dec 16 12:54:44.933234 kernel: PCI: Using configuration type 1 for base access
Dec 16 12:54:44.933244 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 16 12:54:44.933254 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 12:54:44.933266 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 12:54:44.933276 kernel: ACPI: Added _OSI(Module Device)
Dec 16 12:54:44.933285 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 12:54:44.933295 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 12:54:44.933305 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 12:54:44.933314 kernel: ACPI: Interpreter enabled
Dec 16 12:54:44.933324 kernel: ACPI: PM: (supports S0 S5)
Dec 16 12:54:44.933336 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 16 12:54:44.933346 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 16 12:54:44.933355 kernel: PCI: Using E820 reservations for host bridge windows
Dec 16 12:54:44.933365 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 16 12:54:44.933374 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 16 12:54:44.933609 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 16 12:54:44.934921 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Dec 16 12:54:44.935107 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Dec 16 12:54:44.935124 kernel: acpiphp: Slot [3] registered
Dec 16 12:54:44.935134 kernel: acpiphp: Slot [4] registered
Dec 16 12:54:44.935144 kernel: acpiphp: Slot [5] registered
Dec 16 12:54:44.935154 kernel: acpiphp: Slot [6] registered
Dec 16 12:54:44.935164 kernel: acpiphp: Slot [7] registered
Dec 16 12:54:44.935178 kernel: acpiphp: Slot [8] registered
Dec 16 12:54:44.935188 kernel: acpiphp: Slot [9] registered
Dec 16 12:54:44.935198 kernel: acpiphp: Slot [10] registered
Dec 16 12:54:44.935208 kernel: acpiphp: Slot [11] registered
Dec 16 12:54:44.935217 kernel: acpiphp: Slot [12] registered
Dec 16 12:54:44.935227 kernel: acpiphp: Slot [13] registered
Dec 16 12:54:44.935237 kernel: acpiphp: Slot [14] registered
Dec 16 12:54:44.935246 kernel: acpiphp: Slot [15] registered
Dec 16 12:54:44.935259 kernel: acpiphp: Slot [16] registered
Dec 16 12:54:44.935268 kernel: acpiphp: Slot [17] registered
Dec 16 12:54:44.935278 kernel: acpiphp: Slot [18] registered
Dec 16 12:54:44.935287 kernel: acpiphp: Slot [19] registered
Dec 16 12:54:44.935296 kernel: acpiphp: Slot [20] registered
Dec 16 12:54:44.935306 kernel: acpiphp: Slot [21] registered
Dec 16 12:54:44.935315 kernel: acpiphp: Slot [22] registered
Dec 16 12:54:44.935327 kernel: acpiphp: Slot [23] registered
Dec 16 12:54:44.935337 kernel: acpiphp: Slot [24] registered
Dec 16 12:54:44.935346 kernel: acpiphp: Slot [25] registered
Dec 16 12:54:44.935355 kernel: acpiphp: Slot [26] registered
Dec 16 12:54:44.935365 kernel: acpiphp: Slot [27] registered
Dec 16 12:54:44.935374 kernel: acpiphp: Slot [28] registered
Dec 16 12:54:44.935383 kernel: acpiphp: Slot [29] registered
Dec 16 12:54:44.935395 kernel: acpiphp: Slot [30] registered
Dec 16 12:54:44.935405 kernel: acpiphp: Slot [31] registered
Dec 16 12:54:44.935415 kernel: PCI host bridge to bus 0000:00
Dec 16 12:54:44.935558 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 16 12:54:44.935695 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 16 12:54:44.935814 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 16 12:54:44.935934 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 16 12:54:44.936058 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Dec 16 12:54:44.936175 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 16 12:54:44.936338 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec 16 12:54:44.936612 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec 16 12:54:44.937396 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Dec 16 12:54:44.937790 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef]
Dec 16 12:54:44.937924 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Dec 16 12:54:44.939609 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Dec 16 12:54:44.939794 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Dec 16 12:54:44.942857 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
Dec 16 12:54:44.943017 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec 16 12:54:44.943152 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f]
Dec 16 12:54:44.943289 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec 16 12:54:44.943419 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Dec 16 12:54:44.943567 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Dec 16 12:54:44.944302 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec 16 12:54:44.944456 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Dec 16 12:54:44.944625 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 16 12:54:44.944772 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff]
Dec 16 12:54:44.944902 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref]
Dec 16 12:54:44.945033 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 16 12:54:44.945178 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 16 12:54:44.945310 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf]
Dec 16 12:54:44.945440 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff]
Dec 16 12:54:44.945577 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 16 12:54:44.946802 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 16 12:54:44.946955 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df]
Dec 16 12:54:44.947099 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff]
Dec 16 12:54:44.947280 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 16 12:54:44.947480 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Dec 16 12:54:44.947619 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f]
Dec 16 12:54:44.948761 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff]
Dec 16 12:54:44.948902 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 16 12:54:44.949065 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 16 12:54:44.949198 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f]
Dec 16 12:54:44.949329 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff]
Dec 16 12:54:44.949460 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 16 12:54:44.949599 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 16 12:54:44.949758 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff]
Dec 16 12:54:44.949889 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff]
Dec 16 12:54:44.950017 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref]
Dec 16 12:54:44.950165 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Dec 16 12:54:44.950295 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f]
Dec 16 12:54:44.950429 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref]
Dec 16 12:54:44.950442 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 16 12:54:44.950452 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 16 12:54:44.950462 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 16 12:54:44.950472 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 16 12:54:44.950482 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 16 12:54:44.950491 kernel: iommu: Default domain type: Translated
Dec 16 12:54:44.950505 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 16 12:54:44.950515 kernel: PCI: Using ACPI for IRQ routing
Dec 16 12:54:44.950525 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 16 12:54:44.950535 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 16 12:54:44.950545 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Dec 16 12:54:44.950704 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 16 12:54:44.950836 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 16 12:54:44.950969 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 16 12:54:44.950982 kernel: vgaarb: loaded
Dec 16 12:54:44.950992 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 16 12:54:44.951002 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 16 12:54:44.951013 kernel: clocksource: Switched to clocksource kvm-clock
Dec 16 12:54:44.951027 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 12:54:44.951042 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 12:54:44.951058 kernel: pnp: PnP ACPI init
Dec 16 12:54:44.951072 kernel: pnp: PnP ACPI: found 4 devices
Dec 16 12:54:44.951086 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 16 12:54:44.951099 kernel: NET: Registered PF_INET protocol family
Dec 16 12:54:44.951112 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 12:54:44.951125 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 16 12:54:44.951138 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 12:54:44.951156 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 16 12:54:44.951171 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 16 12:54:44.951185 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 16 12:54:44.951201 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 16 12:54:44.951211 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 16 12:54:44.951221 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 12:54:44.951231 kernel: NET: Registered PF_XDP protocol family
Dec 16 12:54:44.951380 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 16 12:54:44.951501 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 16 12:54:44.951617 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 16 12:54:44.951760 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 16 12:54:44.951879 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Dec 16 12:54:44.952015 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 16 12:54:44.952175 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 16 12:54:44.952196 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 16 12:54:44.952334 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 27355 usecs
Dec 16 12:54:44.952354 kernel: PCI: CLS 0 bytes, default 64
Dec 16 12:54:44.952367 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 16 12:54:44.952383 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f3ba4d6b9, max_idle_ns: 440795310912 ns
Dec 16 12:54:44.952399 kernel: Initialise system trusted keyrings
Dec 16 12:54:44.952412 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 16 12:54:44.952430 kernel: Key type asymmetric registered
Dec 16 12:54:44.952443 kernel: Asymmetric key parser 'x509' registered
Dec 16 12:54:44.952457 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 16 12:54:44.952471 kernel: io scheduler mq-deadline registered
Dec 16 12:54:44.952485 kernel: io scheduler kyber registered
Dec 16 12:54:44.952499 kernel: io scheduler bfq registered
Dec 16 12:54:44.952530 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 16 12:54:44.952550 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 16 12:54:44.952564 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 16 12:54:44.952579 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 16 12:54:44.952593 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 12:54:44.952607 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 16 12:54:44.952623 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 16 12:54:44.953346 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 16 12:54:44.953364 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 16 12:54:44.953538 kernel: rtc_cmos 00:03: RTC can wake from S4
Dec 16 12:54:44.953553 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 16 12:54:44.954066 kernel: rtc_cmos 00:03: registered as rtc0
Dec 16 12:54:44.954212 kernel: rtc_cmos 00:03: setting system clock to 2025-12-16T12:54:43 UTC (1765889683)
Dec 16 12:54:44.954340 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Dec 16 12:54:44.954358 kernel: intel_pstate: CPU model not supported
Dec 16 12:54:44.954369 kernel: NET: Registered PF_INET6 protocol family
Dec 16 12:54:44.954379 kernel: Segment Routing with IPv6
Dec 16 12:54:44.954389 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 12:54:44.954399 kernel: NET: Registered PF_PACKET protocol family
Dec 16 12:54:44.954410 kernel: Key type dns_resolver registered
Dec 16 12:54:44.954420 kernel: IPI shorthand broadcast: enabled
Dec 16 12:54:44.954433 kernel: sched_clock: Marking stable (1865004044, 151940606)->(2045153360, -28208710)
Dec 16 12:54:44.954443 kernel: registered taskstats version 1
Dec 16 12:54:44.954452 kernel: Loading compiled-in X.509 certificates
Dec 16 12:54:44.954463 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: b90706f42f055ab9f35fc8fc29156d877adb12c4'
Dec 16 12:54:44.954472 kernel: Demotion targets for Node 0: null
Dec 16 12:54:44.954482 kernel: Key type .fscrypt registered
Dec 16 12:54:44.954492 kernel: Key type fscrypt-provisioning registered
Dec 16 12:54:44.954519 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 16 12:54:44.954532 kernel: ima: Allocated hash algorithm: sha1
Dec 16 12:54:44.954542 kernel: ima: No architecture policies found
Dec 16 12:54:44.954552 kernel: clk: Disabling unused clocks
Dec 16 12:54:44.954563 kernel: Freeing unused kernel image (initmem) memory: 15464K
Dec 16 12:54:44.954573 kernel: Write protecting the kernel read-only data: 45056k
Dec 16 12:54:44.954583 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K
Dec 16 12:54:44.954596 kernel: Run /init as init process
Dec 16 12:54:44.954606 kernel: with arguments:
Dec 16 12:54:44.954616 kernel: /init
Dec 16 12:54:44.954626 kernel: with environment:
Dec 16 12:54:44.955484 kernel: HOME=/
Dec 16 12:54:44.955502 kernel: TERM=linux
Dec 16 12:54:44.955516 kernel: SCSI subsystem initialized
Dec 16 12:54:44.955531 kernel: libata version 3.00 loaded.
Dec 16 12:54:44.957752 kernel: ata_piix 0000:00:01.1: version 2.13
Dec 16 12:54:44.957937 kernel: scsi host0: ata_piix
Dec 16 12:54:44.958082 kernel: scsi host1: ata_piix
Dec 16 12:54:44.958097 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0
Dec 16 12:54:44.958108 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0
Dec 16 12:54:44.958125 kernel: ACPI: bus type USB registered
Dec 16 12:54:44.958136 kernel: usbcore: registered new interface driver usbfs
Dec 16 12:54:44.958146 kernel: usbcore: registered new interface driver hub
Dec 16 12:54:44.958157 kernel: usbcore: registered new device driver usb
Dec 16 12:54:44.958296 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec 16 12:54:44.958430 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec 16 12:54:44.958560 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec 16 12:54:44.958723 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Dec 16 12:54:44.958886 kernel: hub 1-0:1.0: USB hub found
Dec 16 12:54:44.959028 kernel: hub 1-0:1.0: 2 ports detected
Dec 16 12:54:44.959237 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Dec 16 12:54:44.959379 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Dec 16 12:54:44.959393 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 16 12:54:44.959405 kernel: GPT:16515071 != 125829119
Dec 16 12:54:44.959421 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 16 12:54:44.959436 kernel: GPT:16515071 != 125829119
Dec 16 12:54:44.959450 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 16 12:54:44.959473 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 16 12:54:44.960736 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Dec 16 12:54:44.960936 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Dec 16 12:54:44.961143 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues
Dec 16 12:54:44.961338 kernel: scsi host2: Virtio SCSI HBA
Dec 16 12:54:44.961364 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 16 12:54:44.961376 kernel: device-mapper: uevent: version 1.0.3
Dec 16 12:54:44.961387 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 16 12:54:44.961402 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Dec 16 12:54:44.961418 kernel: raid6: avx2x4 gen() 17812 MB/s
Dec 16 12:54:44.961435 kernel: raid6: avx2x2 gen() 17953 MB/s
Dec 16 12:54:44.961454 kernel: raid6: avx2x1 gen() 13639 MB/s
Dec 16 12:54:44.961467 kernel: raid6: using algorithm avx2x2 gen() 17953 MB/s
Dec 16 12:54:44.961478 kernel: raid6: .... xor() 21153 MB/s, rmw enabled
Dec 16 12:54:44.963823 kernel: raid6: using avx2x2 recovery algorithm
Dec 16 12:54:44.963842 kernel: xor: automatically using best checksumming function avx
Dec 16 12:54:44.963858 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 16 12:54:44.963874 kernel: BTRFS: device fsid ea73a94a-fb20-4d45-8448-4c6f4c422a4f devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (162)
Dec 16 12:54:44.963892 kernel: BTRFS info (device dm-0): first mount of filesystem ea73a94a-fb20-4d45-8448-4c6f4c422a4f
Dec 16 12:54:44.963917 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 16 12:54:44.963931 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 16 12:54:44.963946 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 16 12:54:44.963961 kernel: loop: module loaded
Dec 16 12:54:44.963977 kernel: loop0: detected capacity change from 0 to 100136
Dec 16 12:54:44.963992 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 16 12:54:44.964010 systemd[1]: Successfully made /usr/ read-only.
Dec 16 12:54:44.964035 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 12:54:44.964053 systemd[1]: Detected virtualization kvm.
Dec 16 12:54:44.964069 systemd[1]: Detected architecture x86-64.
Dec 16 12:54:44.964085 systemd[1]: Running in initrd.
Dec 16 12:54:44.964100 systemd[1]: No hostname configured, using default hostname.
Dec 16 12:54:44.964115 systemd[1]: Hostname set to .
Dec 16 12:54:44.964135 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Dec 16 12:54:44.964149 systemd[1]: Queued start job for default target initrd.target.
Dec 16 12:54:44.964164 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 12:54:44.964182 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 12:54:44.966697 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 12:54:44.966714 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 16 12:54:44.966733 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 12:54:44.966745 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 16 12:54:44.966756 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 16 12:54:44.966767 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 12:54:44.966778 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 12:54:44.966789 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 12:54:44.966803 systemd[1]: Reached target paths.target - Path Units.
Dec 16 12:54:44.966814 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 12:54:44.966825 systemd[1]: Reached target swap.target - Swaps.
Dec 16 12:54:44.966835 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 12:54:44.966846 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 12:54:44.966856 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 12:54:44.966867 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Dec 16 12:54:44.966880 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 16 12:54:44.966891 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 16 12:54:44.966901 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 12:54:44.966912 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 12:54:44.966922 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 12:54:44.966934 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 12:54:44.966950 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 16 12:54:44.966970 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 16 12:54:44.966984 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 12:54:44.966998 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 16 12:54:44.967015 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 16 12:54:44.967029 systemd[1]: Starting systemd-fsck-usr.service...
Dec 16 12:54:44.967043 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 12:54:44.967061 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 12:54:44.967076 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:54:44.967090 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 16 12:54:44.967104 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 12:54:44.967123 systemd[1]: Finished systemd-fsck-usr.service.
Dec 16 12:54:44.967137 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 12:54:44.967211 systemd-journald[299]: Collecting audit messages is enabled.
Dec 16 12:54:44.967256 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 16 12:54:44.967275 systemd-journald[299]: Journal started
Dec 16 12:54:44.967305 systemd-journald[299]: Runtime Journal (/run/log/journal/6335987ae32d4206b78f47a993f6f753) is 4.8M, max 39.1M, 34.2M free.
Dec 16 12:54:44.972157 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 12:54:44.972680 kernel: Bridge firewalling registered
Dec 16 12:54:44.973420 systemd-modules-load[300]: Inserted module 'br_netfilter'
Dec 16 12:54:44.980935 kernel: audit: type=1130 audit(1765889684.974:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:44.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:44.976229 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 12:54:44.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:44.986383 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 12:54:44.987700 kernel: audit: type=1130 audit(1765889684.981:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:44.991198 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 12:54:44.993875 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 12:54:44.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:45.001331 kernel: audit: type=1130 audit(1765889684.994:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:45.005854 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 12:54:45.019829 systemd-tmpfiles[316]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 16 12:54:45.079023 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:54:45.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:45.085710 kernel: audit: type=1130 audit(1765889685.079:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:45.085057 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 12:54:45.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:45.089841 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 16 12:54:45.092221 kernel: audit: type=1130 audit(1765889685.085:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:45.107225 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 12:54:45.114754 kernel: audit: type=1130 audit(1765889685.107:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:45.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:45.108566 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:54:45.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:45.120671 kernel: audit: type=1130 audit(1765889685.115:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:45.120949 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 12:54:45.118000 audit: BPF prog-id=6 op=LOAD
Dec 16 12:54:45.124663 kernel: audit: type=1334 audit(1765889685.118:9): prog-id=6 op=LOAD
Dec 16 12:54:45.132706 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 12:54:45.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:45.138724 kernel: audit: type=1130 audit(1765889685.133:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:45.140444 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 16 12:54:45.181506 dracut-cmdline[339]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee
Dec 16 12:54:45.205813 systemd-resolved[332]: Positive Trust Anchors:
Dec 16 12:54:45.205834 systemd-resolved[332]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 12:54:45.205838 systemd-resolved[332]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Dec 16 12:54:45.205876 systemd-resolved[332]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 12:54:45.237593 systemd-resolved[332]: Defaulting to hostname 'linux'.
Dec 16 12:54:45.239469 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 12:54:45.240690 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 12:54:45.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:45.320714 kernel: Loading iSCSI transport class v2.0-870.
Dec 16 12:54:45.338672 kernel: iscsi: registered transport (tcp)
Dec 16 12:54:45.365173 kernel: iscsi: registered transport (qla4xxx)
Dec 16 12:54:45.365280 kernel: QLogic iSCSI HBA Driver
Dec 16 12:54:45.400264 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 12:54:45.439790 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 12:54:45.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:45.442791 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 12:54:45.503105 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 16 12:54:45.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:45.510702 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 16 12:54:45.511983 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 16 12:54:45.562109 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 12:54:45.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:45.562000 audit: BPF prog-id=7 op=LOAD
Dec 16 12:54:45.562000 audit: BPF prog-id=8 op=LOAD
Dec 16 12:54:45.564999 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 12:54:45.595130 systemd-udevd[576]: Using default interface naming scheme 'v257'.
Dec 16 12:54:45.607370 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 12:54:45.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:45.611710 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 16 12:54:45.646476 dracut-pre-trigger[642]: rd.md=0: removing MD RAID activation
Dec 16 12:54:45.656776 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 12:54:45.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:45.660000 audit: BPF prog-id=9 op=LOAD
Dec 16 12:54:45.662870 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 12:54:45.692024 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 12:54:45.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:45.695208 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 12:54:45.719559 systemd-networkd[695]: lo: Link UP
Dec 16 12:54:45.719568 systemd-networkd[695]: lo: Gained carrier
Dec 16 12:54:45.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:45.720513 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 12:54:45.721156 systemd[1]: Reached target network.target - Network.
Dec 16 12:54:45.779368 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 12:54:45.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:45.781803 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 16 12:54:45.875328 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 16 12:54:45.889289 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 16 12:54:45.927210 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 16 12:54:45.939750 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 16 12:54:45.942567 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 16 12:54:45.973090 disk-uuid[746]: Primary Header is updated.
Dec 16 12:54:45.973090 disk-uuid[746]: Secondary Entries is updated.
Dec 16 12:54:45.973090 disk-uuid[746]: Secondary Header is updated.
Dec 16 12:54:45.974995 kernel: cryptd: max_cpu_qlen set to 1000
Dec 16 12:54:45.994656 kernel: AES CTR mode by8 optimization enabled
Dec 16 12:54:46.057112 systemd-networkd[695]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/yy-digitalocean.network
Dec 16 12:54:46.057122 systemd-networkd[695]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Dec 16 12:54:46.058165 systemd-networkd[695]: eth0: Link UP
Dec 16 12:54:46.059784 systemd-networkd[695]: eth0: Gained carrier
Dec 16 12:54:46.059801 systemd-networkd[695]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/yy-digitalocean.network
Dec 16 12:54:46.074671 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Dec 16 12:54:46.076788 systemd-networkd[695]: eth0: DHCPv4 address 164.90.155.252/20, gateway 164.90.144.1 acquired from 169.254.169.253
Dec 16 12:54:46.099319 systemd-networkd[695]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 16 12:54:46.100668 systemd-networkd[695]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 12:54:46.101418 systemd-networkd[695]: eth1: Link UP
Dec 16 12:54:46.101616 systemd-networkd[695]: eth1: Gained carrier
Dec 16 12:54:46.104159 systemd-networkd[695]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 16 12:54:46.117717 systemd-networkd[695]: eth1: DHCPv4 address 10.124.0.29/20 acquired from 169.254.169.253
Dec 16 12:54:46.136366 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 12:54:46.137806 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:54:46.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:46.141170 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:54:46.142584 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:54:46.183578 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 16 12:54:46.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:46.186507 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 12:54:46.187931 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 12:54:46.247506 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 12:54:46.249381 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 16 12:54:46.251374 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:54:46.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:46.275590 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 12:54:46.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:47.062279 disk-uuid[748]: Warning: The kernel is still using the old partition table.
Dec 16 12:54:47.062279 disk-uuid[748]: The new table will be used at the next reboot or after you
Dec 16 12:54:47.062279 disk-uuid[748]: run partprobe(8) or kpartx(8)
Dec 16 12:54:47.062279 disk-uuid[748]: The operation has completed successfully.
Dec 16 12:54:47.073681 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 16 12:54:47.073871 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 16 12:54:47.084262 kernel: kauditd_printk_skb: 16 callbacks suppressed
Dec 16 12:54:47.084316 kernel: audit: type=1130 audit(1765889687.074:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:47.084334 kernel: audit: type=1131 audit(1765889687.074:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:47.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:47.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:47.078902 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 16 12:54:47.122695 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (841)
Dec 16 12:54:47.126115 kernel: BTRFS info (device vda6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098
Dec 16 12:54:47.126186 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 12:54:47.131781 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 12:54:47.131891 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 12:54:47.141679 kernel: BTRFS info (device vda6): last unmount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098
Dec 16 12:54:47.143142 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 16 12:54:47.147804 kernel: audit: type=1130 audit(1765889687.143:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:47.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:47.146836 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 16 12:54:47.153832 systemd-networkd[695]: eth0: Gained IPv6LL
Dec 16 12:54:47.357392 ignition[860]: Ignition 2.22.0
Dec 16 12:54:47.358451 ignition[860]: Stage: fetch-offline
Dec 16 12:54:47.358525 ignition[860]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:54:47.358542 ignition[860]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 16 12:54:47.362795 ignition[860]: parsed url from cmdline: ""
Dec 16 12:54:47.362811 ignition[860]: no config URL provided
Dec 16 12:54:47.362826 ignition[860]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 12:54:47.362855 ignition[860]: no config at "/usr/lib/ignition/user.ign"
Dec 16 12:54:47.362865 ignition[860]: failed to fetch config: resource requires networking
Dec 16 12:54:47.366406 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 12:54:47.376414 kernel: audit: type=1130 audit(1765889687.366:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:47.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:47.363117 ignition[860]: Ignition finished successfully
Dec 16 12:54:47.370840 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 16 12:54:47.412923 ignition[866]: Ignition 2.22.0
Dec 16 12:54:47.412942 ignition[866]: Stage: fetch
Dec 16 12:54:47.413123 ignition[866]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:54:47.413133 ignition[866]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 16 12:54:47.413232 ignition[866]: parsed url from cmdline: ""
Dec 16 12:54:47.413236 ignition[866]: no config URL provided
Dec 16 12:54:47.413242 ignition[866]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 12:54:47.413249 ignition[866]: no config at "/usr/lib/ignition/user.ign"
Dec 16 12:54:47.413278 ignition[866]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Dec 16 12:54:47.438734 ignition[866]: GET result: OK
Dec 16 12:54:47.438908 ignition[866]: parsing config with SHA512: 7b967dc8aec0a5c0f85e190a2bbd336353a18cc02c84c2cb77ec20a4e6af8d5b55efe8d0e8db1e0568ea5c860100e021cd62b5000b06e47dab000fe27815c64a
Dec 16 12:54:47.445645 unknown[866]: fetched base config from "system"
Dec 16 12:54:47.445666 unknown[866]: fetched base config from "system"
Dec 16 12:54:47.446142 ignition[866]: fetch: fetch complete
Dec 16 12:54:47.445673 unknown[866]: fetched user config from "digitalocean"
Dec 16 12:54:47.446148 ignition[866]: fetch: fetch passed
Dec 16 12:54:47.446235 ignition[866]: Ignition finished successfully
Dec 16 12:54:47.448428 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 16 12:54:47.453423 kernel: audit: type=1130 audit(1765889687.448:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:47.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:47.452831 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 16 12:54:47.494107 ignition[872]: Ignition 2.22.0
Dec 16 12:54:47.494120 ignition[872]: Stage: kargs
Dec 16 12:54:47.494330 ignition[872]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:54:47.494341 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 16 12:54:47.495944 ignition[872]: kargs: kargs passed
Dec 16 12:54:47.496030 ignition[872]: Ignition finished successfully
Dec 16 12:54:47.498916 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 16 12:54:47.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:47.502788 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 16 12:54:47.504166 kernel: audit: type=1130 audit(1765889687.499:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:47.546507 ignition[878]: Ignition 2.22.0
Dec 16 12:54:47.546529 ignition[878]: Stage: disks
Dec 16 12:54:47.546718 ignition[878]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:54:47.546727 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 16 12:54:47.547591 ignition[878]: disks: disks passed
Dec 16 12:54:47.547676 ignition[878]: Ignition finished successfully
Dec 16 12:54:47.555317 kernel: audit: type=1130 audit(1765889687.550:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:47.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:47.550416 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 16 12:54:47.551709 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 16 12:54:47.555781 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 16 12:54:47.556798 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 12:54:47.557619 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 12:54:47.558572 systemd[1]: Reached target basic.target - Basic System.
Dec 16 12:54:47.560906 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 16 12:54:47.604245 systemd-fsck[887]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Dec 16 12:54:47.607112 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 16 12:54:47.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:47.611774 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 16 12:54:47.613257 kernel: audit: type=1130 audit(1765889687.607:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:47.745658 kernel: EXT4-fs (vda9): mounted filesystem 7cac6192-738c-43cc-9341-24f71d091e91 r/w with ordered data mode. Quota mode: none.
Dec 16 12:54:47.746627 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 16 12:54:47.748618 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 16 12:54:47.751431 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 12:54:47.754742 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 16 12:54:47.759116 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
Dec 16 12:54:47.764997 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Dec 16 12:54:47.768564 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 16 12:54:47.769735 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 12:54:47.775663 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (895)
Dec 16 12:54:47.778116 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 16 12:54:47.784067 kernel: BTRFS info (device vda6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098
Dec 16 12:54:47.784096 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 12:54:47.795054 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 16 12:54:47.802790 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 12:54:47.802840 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 12:54:47.809123 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 12:54:47.871804 coreos-metadata[897]: Dec 16 12:54:47.867 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Dec 16 12:54:47.876368 coreos-metadata[898]: Dec 16 12:54:47.876 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Dec 16 12:54:47.879915 coreos-metadata[897]: Dec 16 12:54:47.879 INFO Fetch successful
Dec 16 12:54:47.887405 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
Dec 16 12:54:47.887530 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
Dec 16 12:54:47.891129 coreos-metadata[898]: Dec 16 12:54:47.888 INFO Fetch successful
Dec 16 12:54:47.903645 kernel: audit: type=1130 audit(1765889687.891:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-afterburn-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:47.903686 kernel: audit: type=1131 audit(1765889687.891:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-afterburn-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:47.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-afterburn-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:47.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-afterburn-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:47.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:47.898194 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 16 12:54:47.904859 coreos-metadata[898]: Dec 16 12:54:47.896 INFO wrote hostname ci-4515.1.0-3-ef2be4b8ba to /sysroot/etc/hostname
Dec 16 12:54:47.906837 initrd-setup-root[927]: cut: /sysroot/etc/passwd: No such file or directory
Dec 16 12:54:47.912571 initrd-setup-root[934]: cut: /sysroot/etc/group: No such file or directory
Dec 16 12:54:47.918072 initrd-setup-root[941]: cut: /sysroot/etc/shadow: No such file or directory
Dec 16 12:54:47.924733 initrd-setup-root[948]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 16 12:54:47.984821 systemd-networkd[695]: eth1: Gained IPv6LL
Dec 16 12:54:48.037110 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 16 12:54:48.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:48.039323 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 16 12:54:48.040727 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 16 12:54:48.065694 kernel: BTRFS info (device vda6): last unmount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098
Dec 16 12:54:48.085162 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 16 12:54:48.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:48.108579 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 16 12:54:48.111683 ignition[1016]: INFO : Ignition 2.22.0
Dec 16 12:54:48.112499 ignition[1016]: INFO : Stage: mount
Dec 16 12:54:48.113321 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 12:54:48.114711 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 16 12:54:48.115824 ignition[1016]: INFO : mount: mount passed
Dec 16 12:54:48.116398 ignition[1016]: INFO : Ignition finished successfully
Dec 16 12:54:48.118575 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 16 12:54:48.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:48.121064 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 16 12:54:48.153265 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 12:54:48.186704 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1027)
Dec 16 12:54:48.190363 kernel: BTRFS info (device vda6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098
Dec 16 12:54:48.190583 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 12:54:48.195349 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 12:54:48.195431 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 12:54:48.197765 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 12:54:48.255100 ignition[1044]: INFO : Ignition 2.22.0
Dec 16 12:54:48.256245 ignition[1044]: INFO : Stage: files
Dec 16 12:54:48.256245 ignition[1044]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 12:54:48.256245 ignition[1044]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 16 12:54:48.258000 ignition[1044]: DEBUG : files: compiled without relabeling support, skipping
Dec 16 12:54:48.258825 ignition[1044]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 16 12:54:48.258825 ignition[1044]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 16 12:54:48.262441 ignition[1044]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 16 12:54:48.263377 ignition[1044]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 16 12:54:48.264241 unknown[1044]: wrote ssh authorized keys file for user: core
Dec 16 12:54:48.265459 ignition[1044]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 16 12:54:48.266768 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 16 12:54:48.267645 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Dec 16 12:54:48.306984 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 16 12:54:48.385537 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 16 12:54:48.386754 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 16 12:54:48.386754 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 16 12:54:48.386754 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 12:54:48.386754 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 12:54:48.386754 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 12:54:48.386754 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 12:54:48.386754 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 12:54:48.386754 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 12:54:48.394927 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 12:54:48.394927 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 12:54:48.394927 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 16 12:54:48.394927 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 16 12:54:48.394927 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 16 12:54:48.394927 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Dec 16 12:54:48.688936 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 16 12:54:49.068970 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 16 12:54:49.068970 ignition[1044]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 16 12:54:49.071482 ignition[1044]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 12:54:49.072839 ignition[1044]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 12:54:49.072839 ignition[1044]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 16 12:54:49.074138 ignition[1044]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Dec 16 12:54:49.074138 ignition[1044]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Dec 16 12:54:49.074138 ignition[1044]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 12:54:49.074138 ignition[1044]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 12:54:49.074138 ignition[1044]: INFO : files: files passed
Dec 16 12:54:49.074138 ignition[1044]: INFO : Ignition finished successfully
Dec 16 12:54:49.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:49.075109 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 16 12:54:49.077375 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 16 12:54:49.081033 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 16 12:54:49.091868 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 16 12:54:49.093258 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 16 12:54:49.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:49.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:49.106455 initrd-setup-root-after-ignition[1080]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 12:54:49.107395 initrd-setup-root-after-ignition[1076]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 12:54:49.107395 initrd-setup-root-after-ignition[1076]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 12:54:49.109489 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 12:54:49.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:49.110912 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 16 12:54:49.114878 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 16 12:54:49.190509 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 16 12:54:49.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:49.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:49.190718 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 16 12:54:49.191873 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 16 12:54:49.193264 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 16 12:54:49.194541 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 12:54:49.196164 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 12:54:49.231051 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 12:54:49.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:49.233862 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 16 12:54:49.260730 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 12:54:49.260921 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 16 12:54:49.262947 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 12:54:49.263965 systemd[1]: Stopped target timers.target - Timer Units.
Dec 16 12:54:49.264790 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 16 12:54:49.264943 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 12:54:49.265622 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 16 12:54:49.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:49.267814 systemd[1]: Stopped target basic.target - Basic System.
Dec 16 12:54:49.268879 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 16 12:54:49.269845 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 12:54:49.270709 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 16 12:54:49.271774 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 12:54:49.272787 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 16 12:54:49.273945 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 12:54:49.275122 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 16 12:54:49.276139 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 16 12:54:49.281559 systemd[1]: Stopped target swap.target - Swaps.
Dec 16 12:54:49.282244 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 16 12:54:49.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:49.282430 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 12:54:49.283288 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 16 12:54:49.284361 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 12:54:49.285096 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 16 12:54:49.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:49.285259 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 12:54:49.286104 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 16 12:54:49.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:49.286312 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 16 12:54:49.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:49.287332 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 16 12:54:49.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:49.287511 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 12:54:49.288824 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 16 12:54:49.288935 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 16 12:54:49.290058 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 16 12:54:49.290287 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 16 12:54:49.293762 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 16 12:54:49.295941 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 16 12:54:49.297729 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 16 12:54:49.297948 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 12:54:49.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:49.300945 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 16 12:54:49.301125 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 12:54:49.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:49.302204 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 16 12:54:49.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:49.302360 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 12:54:49.310603 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 16 12:54:49.312569 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 16 12:54:49.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:49.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:49.334174 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 16 12:54:49.340265 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 16 12:54:49.340935 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 16 12:54:49.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:49.345775 ignition[1100]: INFO : Ignition 2.22.0
Dec 16 12:54:49.345775 ignition[1100]: INFO : Stage: umount
Dec 16 12:54:49.346948 ignition[1100]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 12:54:49.346948 ignition[1100]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 16 12:54:49.349232 ignition[1100]: INFO : umount: umount passed
Dec 16 12:54:49.349764 ignition[1100]: INFO : Ignition finished successfully
Dec 16 12:54:49.351677 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 16 12:54:49.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:49.352559 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 16 12:54:49.354129 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 16 12:54:49.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:49.354188 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 16 12:54:49.356002 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 16 12:54:49.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:49.356079 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 16 12:54:49.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:49.356930 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 16 12:54:49.356992 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 16 12:54:49.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:49.358310 systemd[1]: Stopped target network.target - Network.
Dec 16 12:54:49.359351 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 16 12:54:49.359431 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 12:54:49.360002 systemd[1]: Stopped target paths.target - Path Units.
Dec 16 12:54:49.360981 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 16 12:54:49.361036 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 12:54:49.361783 systemd[1]: Stopped target slices.target - Slice Units.
Dec 16 12:54:49.362751 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 16 12:54:49.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:49.363900 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 16 12:54:49.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:49.364000 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 12:54:49.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:49.364756 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 16 12:54:49.364798 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 12:54:49.365601 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Dec 16 12:54:49.365639 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Dec 16 12:54:49.366602 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 16 12:54:49.366702 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 16 12:54:49.367507 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 16 12:54:49.367556 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 16 12:54:49.368533 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 16 12:54:49.368599 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 16 12:54:49.369500 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 16 12:54:49.370336 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 16 12:54:49.379848 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 16 12:54:49.380001 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 16 12:54:49.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:49.383000 audit: BPF prog-id=6 op=UNLOAD
Dec 16 12:54:49.384313 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 16 12:54:49.384428 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 16 12:54:49.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:49.388014 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 16 12:54:49.388000 audit: BPF prog-id=9 op=UNLOAD
Dec 16 12:54:49.388853 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 16 12:54:49.388901 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 12:54:49.390741 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 16 12:54:49.391906 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 16 12:54:49.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:49.391976 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 12:54:49.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:49.395050 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 12:54:49.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:49.395120 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:54:49.395972 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 16 12:54:49.396023 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 16 12:54:49.397064 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 12:54:49.412520 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 16 12:54:49.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:49.414000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:49.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:49.412733 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 12:54:49.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:49.413592 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 16 12:54:49.413656 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 16 12:54:49.414180 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 16 12:54:49.414214 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 12:54:49.414735 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 16 12:54:49.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:49.414797 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 12:54:49.415379 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 16 12:54:49.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:49.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:49.415444 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 16 12:54:49.416965 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 16 12:54:49.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:49.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:49.417025 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 12:54:49.427792 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 16 12:54:49.428623 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 16 12:54:49.428770 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 12:54:49.430401 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 16 12:54:49.430493 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 12:54:49.431909 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 16 12:54:49.431971 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 12:54:49.433640 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 12:54:49.433700 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:54:49.434787 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:54:49.434866 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:54:49.448595 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 12:54:49.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:54:49.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:54:49.448787 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 12:54:49.460870 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 12:54:49.461003 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 12:54:49.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:54:49.463222 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 12:54:49.465213 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 12:54:49.488581 systemd[1]: Switching root. Dec 16 12:54:49.526605 systemd-journald[299]: Journal stopped Dec 16 12:54:50.852100 systemd-journald[299]: Received SIGTERM from PID 1 (systemd). 
Dec 16 12:54:50.852182 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 12:54:50.852231 kernel: SELinux: policy capability open_perms=1 Dec 16 12:54:50.852253 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 12:54:50.852270 kernel: SELinux: policy capability always_check_network=0 Dec 16 12:54:50.852282 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 12:54:50.852299 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 12:54:50.852311 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 12:54:50.852325 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 12:54:50.852337 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 12:54:50.852359 systemd[1]: Successfully loaded SELinux policy in 76.110ms. Dec 16 12:54:50.852376 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.737ms. Dec 16 12:54:50.852395 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 12:54:50.852429 systemd[1]: Detected virtualization kvm. Dec 16 12:54:50.852449 systemd[1]: Detected architecture x86-64. Dec 16 12:54:50.852468 systemd[1]: Detected first boot. Dec 16 12:54:50.852481 systemd[1]: Hostname set to . Dec 16 12:54:50.852495 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 16 12:54:50.852509 zram_generator::config[1144]: No configuration found. 
Dec 16 12:54:50.852527 kernel: Guest personality initialized and is inactive Dec 16 12:54:50.852539 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 16 12:54:50.852552 kernel: Initialized host personality Dec 16 12:54:50.852568 kernel: NET: Registered PF_VSOCK protocol family Dec 16 12:54:50.852581 systemd[1]: Populated /etc with preset unit settings. Dec 16 12:54:50.852595 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 12:54:50.852609 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 12:54:50.852625 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 12:54:50.855703 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 12:54:50.855727 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 12:54:50.855742 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 12:54:50.855755 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 12:54:50.855770 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 12:54:50.855784 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 12:54:50.855803 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 12:54:50.855818 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 12:54:50.855831 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:54:50.855845 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:54:50.855858 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 12:54:50.855872 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Dec 16 12:54:50.855885 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 12:54:50.855901 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 12:54:50.855914 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 16 12:54:50.855928 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:54:50.855941 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:54:50.855954 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 12:54:50.855969 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 12:54:50.855984 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 12:54:50.855997 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 12:54:50.856012 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:54:50.856025 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:54:50.856038 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 16 12:54:50.856051 systemd[1]: Reached target slices.target - Slice Units. Dec 16 12:54:50.856068 systemd[1]: Reached target swap.target - Swaps. Dec 16 12:54:50.856081 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 12:54:50.856094 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 12:54:50.856108 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 12:54:50.856122 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 12:54:50.856135 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. 
Dec 16 12:54:50.856148 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:54:50.856163 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 16 12:54:50.856180 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 16 12:54:50.856194 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 12:54:50.856206 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:54:50.856219 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 12:54:50.856232 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 12:54:50.856246 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 12:54:50.856259 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 12:54:50.856274 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 12:54:50.856287 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 12:54:50.856301 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 12:54:50.856315 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 12:54:50.856329 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 12:54:50.856342 systemd[1]: Reached target machines.target - Containers. Dec 16 12:54:50.856358 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 12:54:50.856372 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:54:50.856386 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Dec 16 12:54:50.856399 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 12:54:50.856430 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:54:50.856450 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:54:50.856470 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:54:50.856487 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 12:54:50.856500 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:54:50.856515 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 12:54:50.856528 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 12:54:50.856541 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 12:54:50.856554 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 12:54:50.856568 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 12:54:50.856585 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:54:50.856598 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 12:54:50.856611 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 12:54:50.856625 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 12:54:50.856650 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 12:54:50.856664 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... 
Dec 16 12:54:50.856683 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 12:54:50.856702 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 12:54:50.856716 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 12:54:50.856730 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 12:54:50.856744 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 12:54:50.856757 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 12:54:50.856773 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 12:54:50.856787 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 12:54:50.856800 kernel: ACPI: bus type drm_connector registered Dec 16 12:54:50.856814 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:54:50.856828 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 12:54:50.856841 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 12:54:50.856854 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:54:50.856870 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:54:50.856883 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:54:50.856897 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:54:50.856910 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:54:50.856923 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:54:50.856937 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 12:54:50.856950 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Dec 16 12:54:50.856967 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:54:50.856980 kernel: fuse: init (API version 7.41) Dec 16 12:54:50.856993 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 12:54:50.857007 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 12:54:50.857020 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 12:54:50.857036 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 12:54:50.857051 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 16 12:54:50.857065 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 12:54:50.857081 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 12:54:50.857095 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 12:54:50.857109 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 12:54:50.857122 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 12:54:50.857136 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:54:50.857150 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:54:50.857167 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 12:54:50.857182 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:54:50.857234 systemd-journald[1220]: Collecting audit messages is enabled. 
Dec 16 12:54:50.857262 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 12:54:50.857276 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:54:50.857291 systemd-journald[1220]: Journal started Dec 16 12:54:50.857319 systemd-journald[1220]: Runtime Journal (/run/log/journal/6335987ae32d4206b78f47a993f6f753) is 4.8M, max 39.1M, 34.2M free. Dec 16 12:54:50.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:54:50.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:54:50.671000 audit: BPF prog-id=14 op=UNLOAD Dec 16 12:54:50.671000 audit: BPF prog-id=13 op=UNLOAD Dec 16 12:54:50.672000 audit: BPF prog-id=15 op=LOAD Dec 16 12:54:50.679000 audit: BPF prog-id=16 op=LOAD Dec 16 12:54:50.680000 audit: BPF prog-id=17 op=LOAD Dec 16 12:54:50.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:54:50.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:54:50.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:54:50.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:54:50.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:54:50.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:54:50.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:54:50.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:54:50.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:54:50.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:54:50.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:54:50.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:54:50.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:54:50.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:54:50.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:54:50.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:54:50.847000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 16 12:54:50.847000 audit[1220]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffcab1f1df0 a2=4000 a3=0 items=0 ppid=1 pid=1220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:54:50.847000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 16 12:54:50.460509 systemd[1]: Queued start job for default target multi-user.target. Dec 16 12:54:50.473773 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. 
Dec 16 12:54:50.474336 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 12:54:50.870656 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:54:50.873762 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 12:54:50.878724 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 12:54:50.881666 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 12:54:50.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:54:50.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:54:50.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:54:50.888917 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:54:50.889881 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 12:54:50.891201 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 12:54:50.891999 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 12:54:50.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:54:50.899352 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 12:54:50.916075 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 12:54:50.918097 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 12:54:50.921338 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 12:54:50.925128 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 12:54:50.930121 kernel: loop1: detected capacity change from 0 to 111544 Dec 16 12:54:50.931717 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:54:50.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:54:50.949729 systemd-journald[1220]: Time spent on flushing to /var/log/journal/6335987ae32d4206b78f47a993f6f753 is 50.118ms for 1144 entries. Dec 16 12:54:50.949729 systemd-journald[1220]: System Journal (/var/log/journal/6335987ae32d4206b78f47a993f6f753) is 8M, max 163.5M, 155.5M free. Dec 16 12:54:51.011562 systemd-journald[1220]: Received client request to flush runtime journal. Dec 16 12:54:51.011645 kernel: loop2: detected capacity change from 0 to 119256 Dec 16 12:54:50.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:54:50.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:54:51.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:54:50.976904 systemd-tmpfiles[1253]: ACLs are not supported, ignoring. Dec 16 12:54:50.976919 systemd-tmpfiles[1253]: ACLs are not supported, ignoring. Dec 16 12:54:50.985681 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:54:50.993828 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 12:54:50.994763 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 12:54:51.008081 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:54:51.013934 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 12:54:51.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:54:51.025798 kernel: loop3: detected capacity change from 0 to 229808 Dec 16 12:54:51.049678 kernel: loop4: detected capacity change from 0 to 8 Dec 16 12:54:51.049543 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 12:54:51.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:54:51.051000 audit: BPF prog-id=18 op=LOAD Dec 16 12:54:51.051000 audit: BPF prog-id=19 op=LOAD Dec 16 12:54:51.051000 audit: BPF prog-id=20 op=LOAD Dec 16 12:54:51.053857 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... 
Dec 16 12:54:51.056000 audit: BPF prog-id=21 op=LOAD Dec 16 12:54:51.058887 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 12:54:51.062967 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 12:54:51.074666 kernel: loop5: detected capacity change from 0 to 111544 Dec 16 12:54:51.081000 audit: BPF prog-id=22 op=LOAD Dec 16 12:54:51.082000 audit: BPF prog-id=23 op=LOAD Dec 16 12:54:51.082000 audit: BPF prog-id=24 op=LOAD Dec 16 12:54:51.084953 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 16 12:54:51.085000 audit: BPF prog-id=25 op=LOAD Dec 16 12:54:51.086000 audit: BPF prog-id=26 op=LOAD Dec 16 12:54:51.086000 audit: BPF prog-id=27 op=LOAD Dec 16 12:54:51.089821 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 12:54:51.102659 kernel: loop6: detected capacity change from 0 to 119256 Dec 16 12:54:51.123689 kernel: loop7: detected capacity change from 0 to 229808 Dec 16 12:54:51.137252 systemd-tmpfiles[1295]: ACLs are not supported, ignoring. Dec 16 12:54:51.137280 systemd-tmpfiles[1295]: ACLs are not supported, ignoring. Dec 16 12:54:51.149663 kernel: loop1: detected capacity change from 0 to 8 Dec 16 12:54:51.151244 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:54:51.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:54:51.154188 (sd-merge)[1296]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-digitalocean.raw'. Dec 16 12:54:51.167691 (sd-merge)[1296]: Merged extensions into '/usr'. Dec 16 12:54:51.175300 systemd[1]: Reload requested from client PID 1252 ('systemd-sysext') (unit systemd-sysext.service)... 
Dec 16 12:54:51.175321 systemd[1]: Reloading... Dec 16 12:54:51.187128 systemd-nsresourced[1297]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 16 12:54:51.353658 zram_generator::config[1340]: No configuration found. Dec 16 12:54:51.457490 systemd-oomd[1293]: No swap; memory pressure usage will be degraded Dec 16 12:54:51.486280 systemd-resolved[1294]: Positive Trust Anchors: Dec 16 12:54:51.486306 systemd-resolved[1294]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 12:54:51.486310 systemd-resolved[1294]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 12:54:51.486349 systemd-resolved[1294]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 12:54:51.514985 systemd-resolved[1294]: Using system hostname 'ci-4515.1.0-3-ef2be4b8ba'. Dec 16 12:54:51.674723 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 12:54:51.675021 systemd[1]: Reloading finished in 499 ms. Dec 16 12:54:51.704358 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 16 12:54:51.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:54:51.705284 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Dec 16 12:54:51.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:54:51.706170 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 16 12:54:51.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:54:51.706908 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 12:54:51.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:54:51.707761 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 12:54:51.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:54:51.712988 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:54:51.724909 systemd[1]: Starting ensure-sysext.service... Dec 16 12:54:51.727917 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Dec 16 12:54:51.730000 audit: BPF prog-id=28 op=LOAD
Dec 16 12:54:51.731000 audit: BPF prog-id=15 op=UNLOAD
Dec 16 12:54:51.731000 audit: BPF prog-id=29 op=LOAD
Dec 16 12:54:51.731000 audit: BPF prog-id=30 op=LOAD
Dec 16 12:54:51.731000 audit: BPF prog-id=16 op=UNLOAD
Dec 16 12:54:51.731000 audit: BPF prog-id=17 op=UNLOAD
Dec 16 12:54:51.733000 audit: BPF prog-id=31 op=LOAD
Dec 16 12:54:51.733000 audit: BPF prog-id=25 op=UNLOAD
Dec 16 12:54:51.733000 audit: BPF prog-id=32 op=LOAD
Dec 16 12:54:51.734000 audit: BPF prog-id=33 op=LOAD
Dec 16 12:54:51.734000 audit: BPF prog-id=26 op=UNLOAD
Dec 16 12:54:51.734000 audit: BPF prog-id=27 op=UNLOAD
Dec 16 12:54:51.734000 audit: BPF prog-id=34 op=LOAD
Dec 16 12:54:51.735000 audit: BPF prog-id=18 op=UNLOAD
Dec 16 12:54:51.735000 audit: BPF prog-id=35 op=LOAD
Dec 16 12:54:51.736000 audit: BPF prog-id=36 op=LOAD
Dec 16 12:54:51.736000 audit: BPF prog-id=19 op=UNLOAD
Dec 16 12:54:51.736000 audit: BPF prog-id=20 op=UNLOAD
Dec 16 12:54:51.739000 audit: BPF prog-id=37 op=LOAD
Dec 16 12:54:51.739000 audit: BPF prog-id=22 op=UNLOAD
Dec 16 12:54:51.739000 audit: BPF prog-id=38 op=LOAD
Dec 16 12:54:51.739000 audit: BPF prog-id=39 op=LOAD
Dec 16 12:54:51.739000 audit: BPF prog-id=23 op=UNLOAD
Dec 16 12:54:51.739000 audit: BPF prog-id=24 op=UNLOAD
Dec 16 12:54:51.740000 audit: BPF prog-id=40 op=LOAD
Dec 16 12:54:51.740000 audit: BPF prog-id=21 op=UNLOAD
Dec 16 12:54:51.783471 systemd[1]: Reload requested from client PID 1384 ('systemctl') (unit ensure-sysext.service)...
Dec 16 12:54:51.783497 systemd[1]: Reloading...
Dec 16 12:54:51.806680 systemd-tmpfiles[1385]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 16 12:54:51.807778 systemd-tmpfiles[1385]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 16 12:54:51.810051 systemd-tmpfiles[1385]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 16 12:54:51.812945 systemd-tmpfiles[1385]: ACLs are not supported, ignoring.
Dec 16 12:54:51.813155 systemd-tmpfiles[1385]: ACLs are not supported, ignoring.
Dec 16 12:54:51.825312 systemd-tmpfiles[1385]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 12:54:51.825325 systemd-tmpfiles[1385]: Skipping /boot
Dec 16 12:54:51.837240 systemd-tmpfiles[1385]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 12:54:51.837386 systemd-tmpfiles[1385]: Skipping /boot
Dec 16 12:54:51.945659 zram_generator::config[1435]: No configuration found.
Dec 16 12:54:52.139576 systemd[1]: Reloading finished in 355 ms.
Dec 16 12:54:52.152351 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 16 12:54:52.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:52.153821 kernel: kauditd_printk_skb: 145 callbacks suppressed
Dec 16 12:54:52.153901 kernel: audit: type=1130 audit(1765889692.152:180): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:52.156000 audit: BPF prog-id=41 op=LOAD
Dec 16 12:54:52.158741 kernel: audit: type=1334 audit(1765889692.156:181): prog-id=41 op=LOAD
Dec 16 12:54:52.156000 audit: BPF prog-id=34 op=UNLOAD
Dec 16 12:54:52.160690 kernel: audit: type=1334 audit(1765889692.156:182): prog-id=34 op=UNLOAD
Dec 16 12:54:52.160756 kernel: audit: type=1334 audit(1765889692.156:183): prog-id=42 op=LOAD
Dec 16 12:54:52.156000 audit: BPF prog-id=42 op=LOAD
Dec 16 12:54:52.161773 kernel: audit: type=1334 audit(1765889692.156:184): prog-id=43 op=LOAD
Dec 16 12:54:52.156000 audit: BPF prog-id=43 op=LOAD
Dec 16 12:54:52.162821 kernel: audit: type=1334 audit(1765889692.156:185): prog-id=35 op=UNLOAD
Dec 16 12:54:52.156000 audit: BPF prog-id=35 op=UNLOAD
Dec 16 12:54:52.163821 kernel: audit: type=1334 audit(1765889692.156:186): prog-id=36 op=UNLOAD
Dec 16 12:54:52.156000 audit: BPF prog-id=36 op=UNLOAD
Dec 16 12:54:52.164802 kernel: audit: type=1334 audit(1765889692.158:187): prog-id=44 op=LOAD
Dec 16 12:54:52.158000 audit: BPF prog-id=44 op=LOAD
Dec 16 12:54:52.165807 kernel: audit: type=1334 audit(1765889692.158:188): prog-id=28 op=UNLOAD
Dec 16 12:54:52.158000 audit: BPF prog-id=28 op=UNLOAD
Dec 16 12:54:52.158000 audit: BPF prog-id=45 op=LOAD
Dec 16 12:54:52.158000 audit: BPF prog-id=46 op=LOAD
Dec 16 12:54:52.158000 audit: BPF prog-id=29 op=UNLOAD
Dec 16 12:54:52.158000 audit: BPF prog-id=30 op=UNLOAD
Dec 16 12:54:52.158000 audit: BPF prog-id=47 op=LOAD
Dec 16 12:54:52.158000 audit: BPF prog-id=31 op=UNLOAD
Dec 16 12:54:52.160000 audit: BPF prog-id=48 op=LOAD
Dec 16 12:54:52.161000 audit: BPF prog-id=49 op=LOAD
Dec 16 12:54:52.161000 audit: BPF prog-id=32 op=UNLOAD
Dec 16 12:54:52.161000 audit: BPF prog-id=33 op=UNLOAD
Dec 16 12:54:52.162000 audit: BPF prog-id=50 op=LOAD
Dec 16 12:54:52.162000 audit: BPF prog-id=40 op=UNLOAD
Dec 16 12:54:52.167197 kernel: audit: type=1334 audit(1765889692.158:189): prog-id=45 op=LOAD
Dec 16 12:54:52.165000 audit: BPF prog-id=51 op=LOAD
Dec 16 12:54:52.165000 audit: BPF prog-id=37 op=UNLOAD
Dec 16 12:54:52.165000 audit: BPF prog-id=52 op=LOAD
Dec 16 12:54:52.165000 audit: BPF prog-id=53 op=LOAD
Dec 16 12:54:52.165000 audit: BPF prog-id=38 op=UNLOAD
Dec 16 12:54:52.165000 audit: BPF prog-id=39 op=UNLOAD
Dec 16 12:54:52.171411 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 12:54:52.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:52.182686 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 12:54:52.188015 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 16 12:54:52.200894 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 16 12:54:52.204686 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 16 12:54:52.205000 audit: BPF prog-id=8 op=UNLOAD
Dec 16 12:54:52.205000 audit: BPF prog-id=7 op=UNLOAD
Dec 16 12:54:52.209000 audit: BPF prog-id=54 op=LOAD
Dec 16 12:54:52.209000 audit: BPF prog-id=55 op=LOAD
Dec 16 12:54:52.212759 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 12:54:52.229073 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 16 12:54:52.235174 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 12:54:52.235626 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 12:54:52.239097 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 12:54:52.244074 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 12:54:52.250508 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 12:54:52.251616 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 12:54:52.252332 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Dec 16 12:54:52.252736 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 12:54:52.252970 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 12:54:52.257316 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 12:54:52.259400 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 12:54:52.259601 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 12:54:52.259813 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Dec 16 12:54:52.259930 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 12:54:52.260044 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 12:54:52.271401 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 12:54:52.272753 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 12:54:52.283748 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 12:54:52.284810 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 12:54:52.285039 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Dec 16 12:54:52.285135 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 12:54:52.285288 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 12:54:52.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:52.300000 audit: BPF prog-id=56 op=LOAD
Dec 16 12:54:52.297934 systemd[1]: Finished ensure-sysext.service.
Dec 16 12:54:52.308664 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 16 12:54:52.316000 audit[1473]: SYSTEM_BOOT pid=1473 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:52.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:52.326031 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 16 12:54:52.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:52.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:52.358211 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 12:54:52.360178 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 12:54:52.363150 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 12:54:52.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:52.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:52.364755 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 12:54:52.365016 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 12:54:52.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:52.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:52.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:52.367341 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 16 12:54:52.369165 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 12:54:52.369426 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 12:54:52.370359 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 12:54:52.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:52.371424 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 12:54:52.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:52.377196 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 12:54:52.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:52.399538 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 16 12:54:52.402126 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 16 12:54:52.409958 systemd-udevd[1472]: Using default interface naming scheme 'v257'.
Dec 16 12:54:52.419000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 16 12:54:52.419000 audit[1502]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffda0d68d50 a2=420 a3=0 items=0 ppid=1462 pid=1502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:54:52.419000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 16 12:54:52.420904 augenrules[1502]: No rules
Dec 16 12:54:52.425284 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 12:54:52.425915 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 12:54:52.453550 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 12:54:52.459813 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 12:54:52.518257 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 16 12:54:52.519705 systemd[1]: Reached target time-set.target - System Time Set.
Dec 16 12:54:52.590422 systemd-networkd[1510]: lo: Link UP
Dec 16 12:54:52.590787 systemd-networkd[1510]: lo: Gained carrier
Dec 16 12:54:52.593903 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 12:54:52.595143 systemd[1]: Reached target network.target - Network.
Dec 16 12:54:52.598891 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 16 12:54:52.602859 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 16 12:54:52.684611 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Dec 16 12:54:52.694269 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped.
Dec 16 12:54:52.697307 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Dec 16 12:54:52.699613 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 12:54:52.699787 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 12:54:52.700985 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 12:54:52.705927 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 12:54:52.711681 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 12:54:52.712303 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 12:54:52.712447 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Dec 16 12:54:52.712480 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 12:54:52.712513 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 16 12:54:52.712528 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 12:54:52.712766 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 16 12:54:52.773137 systemd-networkd[1510]: eth0: Configuring with /run/systemd/network/10-c6:06:91:77:70:33.network.
Dec 16 12:54:52.774883 systemd-networkd[1510]: eth0: Link UP
Dec 16 12:54:52.775058 systemd-networkd[1510]: eth0: Gained carrier
Dec 16 12:54:52.779660 kernel: ISO 9660 Extensions: RRIP_1991A
Dec 16 12:54:52.783161 systemd-timesyncd[1481]: Network configuration changed, trying to establish connection.
Dec 16 12:54:52.783925 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Dec 16 12:54:52.804219 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 12:54:52.804589 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 12:54:52.824941 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 12:54:52.825646 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 12:54:52.828035 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 12:54:52.829122 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 12:54:52.833068 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 12:54:52.833135 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 12:54:52.845603 systemd-networkd[1510]: eth1: Configuring with /run/systemd/network/10-da:1b:a9:31:cb:1d.network.
Dec 16 12:54:52.847280 systemd-networkd[1510]: eth1: Link UP
Dec 16 12:54:52.847556 systemd-timesyncd[1481]: Network configuration changed, trying to establish connection.
Dec 16 12:54:52.848902 systemd-networkd[1510]: eth1: Gained carrier
Dec 16 12:54:52.853115 systemd-timesyncd[1481]: Network configuration changed, trying to establish connection.
Dec 16 12:54:52.853695 systemd-timesyncd[1481]: Network configuration changed, trying to establish connection.
Dec 16 12:54:52.888171 kernel: mousedev: PS/2 mouse device common for all mice
Dec 16 12:54:52.951408 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec 16 12:54:52.950622 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 16 12:54:52.957293 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 16 12:54:52.958171 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 16 12:54:52.970658 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 16 12:54:52.971494 ldconfig[1464]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 16 12:54:52.978530 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 16 12:54:52.988260 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 16 12:54:52.992755 kernel: ACPI: button: Power Button [PWRF]
Dec 16 12:54:53.016842 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 16 12:54:53.033437 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 16 12:54:53.034865 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 12:54:53.036508 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 16 12:54:53.038160 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 16 12:54:53.039615 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Dec 16 12:54:53.041768 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 16 12:54:53.042978 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 16 12:54:53.044812 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update.
Dec 16 12:54:53.046019 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update.
Dec 16 12:54:53.047337 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 16 12:54:53.048573 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 16 12:54:53.048626 systemd[1]: Reached target paths.target - Path Units.
Dec 16 12:54:53.049485 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 12:54:53.052725 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 16 12:54:53.059912 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 16 12:54:53.068930 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Dec 16 12:54:53.071197 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Dec 16 12:54:53.072933 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Dec 16 12:54:53.084762 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 16 12:54:53.087380 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Dec 16 12:54:53.090286 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 16 12:54:53.095730 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 12:54:53.097826 systemd[1]: Reached target basic.target - Basic System.
Dec 16 12:54:53.098579 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 16 12:54:53.098625 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 16 12:54:53.101903 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 16 12:54:53.108988 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 16 12:54:53.115959 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 16 12:54:53.121027 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 16 12:54:53.128606 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 16 12:54:53.135072 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 16 12:54:53.136784 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 16 12:54:53.146764 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec 16 12:54:53.142900 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Dec 16 12:54:53.149547 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 16 12:54:53.154256 jq[1576]: false
Dec 16 12:54:53.171002 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 16 12:54:53.181781 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 16 12:54:53.197775 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 16 12:54:53.209516 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 16 12:54:53.210805 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 16 12:54:53.211971 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 16 12:54:53.212349 google_oslogin_nss_cache[1578]: oslogin_cache_refresh[1578]: Refreshing passwd entry cache
Dec 16 12:54:53.212776 oslogin_cache_refresh[1578]: Refreshing passwd entry cache
Dec 16 12:54:53.219096 systemd[1]: Starting update-engine.service - Update Engine...
Dec 16 12:54:53.224554 google_oslogin_nss_cache[1578]: oslogin_cache_refresh[1578]: Failure getting users, quitting
Dec 16 12:54:53.226880 oslogin_cache_refresh[1578]: Failure getting users, quitting
Dec 16 12:54:53.227741 google_oslogin_nss_cache[1578]: oslogin_cache_refresh[1578]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 16 12:54:53.227741 google_oslogin_nss_cache[1578]: oslogin_cache_refresh[1578]: Refreshing group entry cache
Dec 16 12:54:53.226929 oslogin_cache_refresh[1578]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 16 12:54:53.226997 oslogin_cache_refresh[1578]: Refreshing group entry cache
Dec 16 12:54:53.228046 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 16 12:54:53.228321 google_oslogin_nss_cache[1578]: oslogin_cache_refresh[1578]: Failure getting groups, quitting
Dec 16 12:54:53.228405 oslogin_cache_refresh[1578]: Failure getting groups, quitting
Dec 16 12:54:53.231703 google_oslogin_nss_cache[1578]: oslogin_cache_refresh[1578]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 16 12:54:53.231835 oslogin_cache_refresh[1578]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 16 12:54:53.239677 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 16 12:54:53.242229 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 16 12:54:53.242610 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 16 12:54:53.243335 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Dec 16 12:54:53.244793 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Dec 16 12:54:53.249341 extend-filesystems[1577]: Found /dev/vda6
Dec 16 12:54:53.258728 extend-filesystems[1577]: Found /dev/vda9
Dec 16 12:54:53.270724 extend-filesystems[1577]: Checking size of /dev/vda9
Dec 16 12:54:53.313675 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec 16 12:54:53.318178 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 16 12:54:53.318621 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 16 12:54:53.328910 extend-filesystems[1577]: Resized partition /dev/vda9
Dec 16 12:54:53.541523 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 14138363 blocks
Dec 16 12:54:53.541581 kernel: EXT4-fs (vda9): resized filesystem to 14138363
Dec 16 12:54:53.541606 kernel: Console: switching to colour dummy device 80x25
Dec 16 12:54:53.541657 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 16 12:54:53.541687 kernel: [drm] features: -context_init
Dec 16 12:54:53.541739 coreos-metadata[1573]: Dec 16 12:54:53.432 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Dec 16 12:54:53.541739 coreos-metadata[1573]: Dec 16 12:54:53.460 INFO Fetch successful
Dec 16 12:54:53.428615 dbus-daemon[1574]: [system] SELinux support is enabled
Dec 16 12:54:53.542755 update_engine[1587]: I20251216 12:54:53.427667 1587 main.cc:92] Flatcar Update Engine starting
Dec 16 12:54:53.542755 update_engine[1587]: I20251216 12:54:53.437831 1587 update_check_scheduler.cc:74] Next update check in 4m36s
Dec 16 12:54:53.417772 systemd[1]: motdgen.service: Deactivated successfully.
Dec 16 12:54:53.543185 extend-filesystems[1616]: resize2fs 1.47.3 (8-Jul-2025)
Dec 16 12:54:53.543185 extend-filesystems[1616]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 16 12:54:53.543185 extend-filesystems[1616]: old_desc_blocks = 1, new_desc_blocks = 7
Dec 16 12:54:53.543185 extend-filesystems[1616]: The filesystem on /dev/vda9 is now 14138363 (4k) blocks long.
Dec 16 12:54:53.418977 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 16 12:54:53.547936 extend-filesystems[1577]: Resized filesystem in /dev/vda9
Dec 16 12:54:53.548116 jq[1591]: true
Dec 16 12:54:53.542409 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 16 12:54:53.548615 tar[1594]: linux-amd64/LICENSE
Dec 16 12:54:53.548615 tar[1594]: linux-amd64/helm
Dec 16 12:54:53.564788 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 16 12:54:53.565239 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 16 12:54:53.574295 jq[1619]: true
Dec 16 12:54:53.583708 kernel: [drm] number of scanouts: 1
Dec 16 12:54:53.586415 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 16 12:54:53.586504 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 16 12:54:53.586815 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 16 12:54:53.587014 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Dec 16 12:54:53.587045 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 16 12:54:53.590356 systemd[1]: Started update-engine.service - Update Engine.
Dec 16 12:54:53.594522 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 16 12:54:53.600711 kernel: [drm] number of cap sets: 0
Dec 16 12:54:53.687872 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Dec 16 12:54:53.700746 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 16 12:54:53.701223 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 16 12:54:53.797906 bash[1653]: Updated "/home/core/.ssh/authorized_keys"
Dec 16 12:54:53.800040 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 16 12:54:53.810055 systemd[1]: Starting sshkeys.service...
Dec 16 12:54:53.892615 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec 16 12:54:53.892735 kernel: Console: switching to colour frame buffer device 128x48
Dec 16 12:54:53.893843 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:54:53.955661 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 16 12:54:54.019480 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Dec 16 12:54:54.028316 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Dec 16 12:54:54.188170 coreos-metadata[1663]: Dec 16 12:54:54.187 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Dec 16 12:54:54.205655 coreos-metadata[1663]: Dec 16 12:54:54.202 INFO Fetch successful
Dec 16 12:54:54.249236 unknown[1663]: wrote ssh authorized keys file for user: core
Dec 16 12:54:54.290863 sshd_keygen[1609]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 16 12:54:54.307398 containerd[1617]: time="2025-12-16T12:54:54Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Dec 16 12:54:54.322571 containerd[1617]: time="2025-12-16T12:54:54.316573764Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5
Dec 16 12:54:54.326163 update-ssh-keys[1670]: Updated "/home/core/.ssh/authorized_keys"
Dec 16 12:54:54.331739 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Dec 16 12:54:54.335224 systemd[1]: Finished sshkeys.service.
Dec 16 12:54:54.418383 systemd-logind[1584]: New seat seat0.
Dec 16 12:54:54.419494 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 16 12:54:54.423455 containerd[1617]: time="2025-12-16T12:54:54.422657842Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="35.828µs"
Dec 16 12:54:54.423455 containerd[1617]: time="2025-12-16T12:54:54.422708071Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Dec 16 12:54:54.423455 containerd[1617]: time="2025-12-16T12:54:54.422764223Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Dec 16 12:54:54.423455 containerd[1617]: time="2025-12-16T12:54:54.422782025Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Dec 16 12:54:54.423455 containerd[1617]: time="2025-12-16T12:54:54.423019949Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Dec 16 12:54:54.423455 containerd[1617]: time="2025-12-16T12:54:54.423049432Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 12:54:54.423455 containerd[1617]: time="2025-12-16T12:54:54.423123954Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 12:54:54.423455 containerd[1617]: time="2025-12-16T12:54:54.423142591Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 12:54:54.423455 containerd[1617]: time="2025-12-16T12:54:54.423454647Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 12:54:54.426895 containerd[1617]: time="2025-12-16T12:54:54.423486683Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 12:54:54.426895 containerd[1617]: time="2025-12-16T12:54:54.423504539Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 12:54:54.426895 containerd[1617]: time="2025-12-16T12:54:54.423516817Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Dec 16 12:54:54.426895 containerd[1617]: time="2025-12-16T12:54:54.423778312Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Dec 16 12:54:54.426895 containerd[1617]: time="2025-12-16T12:54:54.423798825Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Dec 16 12:54:54.426895 containerd[1617]: time="2025-12-16T12:54:54.423899203Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Dec 16 12:54:54.426895 containerd[1617]: time="2025-12-16T12:54:54.424205731Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 12:54:54.426895 containerd[1617]: time="2025-12-16T12:54:54.424253405Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 12:54:54.426895 containerd[1617]: time="2025-12-16T12:54:54.424268983Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Dec 16 12:54:54.426895 containerd[1617]: time="2025-12-16T12:54:54.424318850Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Dec 16 12:54:54.427138 containerd[1617]: time="2025-12-16T12:54:54.427076425Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Dec 16 12:54:54.429246 containerd[1617]: time="2025-12-16T12:54:54.427258655Z" level=info msg="metadata content store policy set" policy=shared
Dec 16 12:54:54.433208 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 16 12:54:54.439378 containerd[1617]: time="2025-12-16T12:54:54.438077282Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Dec 16 12:54:54.439378 containerd[1617]: time="2025-12-16T12:54:54.438182544Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Dec 16 12:54:54.439378 containerd[1617]: time="2025-12-16T12:54:54.438333138Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Dec 16 12:54:54.439378 containerd[1617]: time="2025-12-16T12:54:54.438364609Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Dec 16 12:54:54.439378 containerd[1617]: time="2025-12-16T12:54:54.438386408Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Dec 16 12:54:54.439378 containerd[1617]: time="2025-12-16T12:54:54.438409351Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Dec 16 12:54:54.439378 containerd[1617]: time="2025-12-16T12:54:54.438431559Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Dec 16 12:54:54.439378 containerd[1617]: time="2025-12-16T12:54:54.438450733Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Dec 16 12:54:54.439378 containerd[1617]: time="2025-12-16T12:54:54.438486050Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Dec 16 12:54:54.439378 containerd[1617]: time="2025-12-16T12:54:54.438513012Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Dec 16 12:54:54.439378 containerd[1617]: time="2025-12-16T12:54:54.438534161Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Dec 16 12:54:54.439378 containerd[1617]: time="2025-12-16T12:54:54.438555661Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Dec 16 12:54:54.439378 containerd[1617]: time="2025-12-16T12:54:54.438654762Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Dec 16 12:54:54.439378 containerd[1617]: time="2025-12-16T12:54:54.438681507Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Dec 16 12:54:54.439763 containerd[1617]: time="2025-12-16T12:54:54.438874319Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Dec 16 12:54:54.439763 containerd[1617]: time="2025-12-16T12:54:54.438921398Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Dec 16 12:54:54.439763 containerd[1617]: time="2025-12-16T12:54:54.438954487Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Dec 16 12:54:54.439763 containerd[1617]: time="2025-12-16T12:54:54.438983732Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Dec 16 12:54:54.439763 containerd[1617]: time="2025-12-16T12:54:54.439002104Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Dec 16 12:54:54.439763 containerd[1617]: time="2025-12-16T12:54:54.439021786Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Dec 16 12:54:54.439763 containerd[1617]: time="2025-12-16T12:54:54.439044724Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Dec 16 12:54:54.439763 containerd[1617]: time="2025-12-16T12:54:54.439074607Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Dec 16 12:54:54.439763 containerd[1617]: time="2025-12-16T12:54:54.439094893Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Dec 16 12:54:54.439763 containerd[1617]: time="2025-12-16T12:54:54.439115860Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Dec 16 12:54:54.439763 containerd[1617]: time="2025-12-16T12:54:54.439133593Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Dec 16 12:54:54.439763 containerd[1617]: time="2025-12-16T12:54:54.439179707Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Dec 16 12:54:54.439763 containerd[1617]: time="2025-12-16T12:54:54.439264570Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Dec 16 12:54:54.439763 containerd[1617]: time="2025-12-16T12:54:54.439286554Z" level=info msg="Start snapshots syncer"
Dec 16 12:54:54.439763 containerd[1617]: time="2025-12-16T12:54:54.439321985Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Dec 16 12:54:54.440790 systemd-logind[1584]: Watching system buttons on /dev/input/event2 (Power Button)
Dec 16 12:54:54.440830 systemd-logind[1584]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 16 12:54:54.441297 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 16 12:54:54.444064 containerd[1617]: time="2025-12-16T12:54:54.443976332Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Dec 16 12:54:54.447132 containerd[1617]: time="2025-12-16T12:54:54.447065413Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Dec 16 12:54:54.447510 containerd[1617]: time="2025-12-16T12:54:54.447473447Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Dec 16 12:54:54.448552 containerd[1617]: time="2025-12-16T12:54:54.448507762Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Dec 16 12:54:54.449363 containerd[1617]: time="2025-12-16T12:54:54.448749552Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Dec 16 12:54:54.449363 containerd[1617]: time="2025-12-16T12:54:54.448780213Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Dec 16 12:54:54.449363 containerd[1617]: time="2025-12-16T12:54:54.448797783Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Dec 16 12:54:54.449363 containerd[1617]: time="2025-12-16T12:54:54.448815388Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Dec 16 12:54:54.449363 containerd[1617]: time="2025-12-16T12:54:54.448832171Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Dec 16 12:54:54.449363 containerd[1617]: time="2025-12-16T12:54:54.448849565Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Dec 16 12:54:54.449363 containerd[1617]: time="2025-12-16T12:54:54.448864474Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 16 12:54:54.449363 containerd[1617]: time="2025-12-16T12:54:54.448881680Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 16 12:54:54.449098 systemd-logind[1584]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 16 12:54:54.453757 containerd[1617]: time="2025-12-16T12:54:54.450689970Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 16 12:54:54.453757 containerd[1617]: time="2025-12-16T12:54:54.450747307Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 16 12:54:54.453757 containerd[1617]: time="2025-12-16T12:54:54.450775711Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 16 12:54:54.453757 containerd[1617]: time="2025-12-16T12:54:54.450795952Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 16 12:54:54.453757 containerd[1617]: time="2025-12-16T12:54:54.450811016Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 16 12:54:54.453757 containerd[1617]: time="2025-12-16T12:54:54.450835409Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 16 12:54:54.453757 containerd[1617]: time="2025-12-16T12:54:54.450875868Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Dec 16 12:54:54.453757 containerd[1617]: time="2025-12-16T12:54:54.450910948Z" level=info msg="runtime interface created"
Dec 16 12:54:54.453757 containerd[1617]: time="2025-12-16T12:54:54.450927076Z" level=info msg="created NRI interface"
Dec 16 12:54:54.453757 containerd[1617]: time="2025-12-16T12:54:54.450951009Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Dec 16 12:54:54.453757 containerd[1617]: time="2025-12-16T12:54:54.450984626Z" level=info msg="Connect containerd service"
Dec 16 12:54:54.453757 containerd[1617]: time="2025-12-16T12:54:54.451067690Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 16 12:54:54.456762 containerd[1617]: time="2025-12-16T12:54:54.456241025Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 16 12:54:54.458150 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:54:54.458326 locksmithd[1636]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 16 12:54:54.487807 systemd[1]: issuegen.service: Deactivated successfully.
Dec 16 12:54:54.488496 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 16 12:54:54.495947 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 16 12:54:54.557248 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 16 12:54:54.564848 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 16 12:54:54.575348 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 16 12:54:54.578254 systemd[1]: Reached target getty.target - Login Prompts.
Dec 16 12:54:54.578801 systemd-networkd[1510]: eth0: Gained IPv6LL
Dec 16 12:54:54.580625 systemd-timesyncd[1481]: Network configuration changed, trying to establish connection.
Dec 16 12:54:54.589487 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 16 12:54:54.591213 systemd[1]: Reached target network-online.target - Network is Online.
Dec 16 12:54:54.600411 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:54:54.608333 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 16 12:54:54.629954 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 12:54:54.630288 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:54:54.631997 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:54:54.636420 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:54:54.649549 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 12:54:54.650312 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:54:54.661111 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:54:54.709162 kernel: EDAC MC: Ver: 3.0.0
Dec 16 12:54:54.705852 systemd-networkd[1510]: eth1: Gained IPv6LL
Dec 16 12:54:54.707154 systemd-timesyncd[1481]: Network configuration changed, trying to establish connection.
Dec 16 12:54:54.799711 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 16 12:54:54.842568 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:54:54.915004 containerd[1617]: time="2025-12-16T12:54:54.914903724Z" level=info msg="Start subscribing containerd event"
Dec 16 12:54:54.915189 containerd[1617]: time="2025-12-16T12:54:54.915000289Z" level=info msg="Start recovering state"
Dec 16 12:54:54.915497 containerd[1617]: time="2025-12-16T12:54:54.915457872Z" level=info msg="Start event monitor"
Dec 16 12:54:54.915608 containerd[1617]: time="2025-12-16T12:54:54.915505453Z" level=info msg="Start cni network conf syncer for default"
Dec 16 12:54:54.915608 containerd[1617]: time="2025-12-16T12:54:54.915515529Z" level=info msg="Start streaming server"
Dec 16 12:54:54.915926 containerd[1617]: time="2025-12-16T12:54:54.915852868Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 16 12:54:54.915926 containerd[1617]: time="2025-12-16T12:54:54.915873980Z" level=info msg="runtime interface starting up..."
Dec 16 12:54:54.915926 containerd[1617]: time="2025-12-16T12:54:54.915881354Z" level=info msg="starting plugins..."
Dec 16 12:54:54.916218 containerd[1617]: time="2025-12-16T12:54:54.916166542Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 16 12:54:54.916584 containerd[1617]: time="2025-12-16T12:54:54.916552592Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 16 12:54:54.917208 containerd[1617]: time="2025-12-16T12:54:54.916695893Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 16 12:54:54.917940 systemd[1]: Started containerd.service - containerd container runtime.
Dec 16 12:54:54.918410 containerd[1617]: time="2025-12-16T12:54:54.917939727Z" level=info msg="containerd successfully booted in 0.614101s"
Dec 16 12:54:55.105077 tar[1594]: linux-amd64/README.md
Dec 16 12:54:55.134093 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 16 12:54:55.982084 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:54:55.983267 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 16 12:54:55.986840 systemd[1]: Startup finished in 2.849s (kernel) + 5.157s (initrd) + 6.334s (userspace) = 14.341s.
Dec 16 12:54:55.996236 (kubelet)[1745]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 12:54:56.368407 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 16 12:54:56.371162 systemd[1]: Started sshd@0-164.90.155.252:22-147.75.109.163:40832.service - OpenSSH per-connection server daemon (147.75.109.163:40832).
Dec 16 12:54:56.514104 sshd[1755]: Accepted publickey for core from 147.75.109.163 port 40832 ssh2: RSA SHA256:LU1qZA7a/A5pU4aT9e5vsRy+gIvYDHbmZDPRbtPLDD8
Dec 16 12:54:56.518705 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:54:56.533287 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 16 12:54:56.535252 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 16 12:54:56.549799 systemd-logind[1584]: New session 1 of user core.
Dec 16 12:54:56.577082 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 16 12:54:56.584299 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 16 12:54:56.602840 (systemd)[1760]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 16 12:54:56.608135 systemd-logind[1584]: New session c1 of user core.
Dec 16 12:54:56.741207 kubelet[1745]: E1216 12:54:56.741143 1745 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 12:54:56.745858 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 12:54:56.746358 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 12:54:56.747435 systemd[1]: kubelet.service: Consumed 1.324s CPU time, 267.7M memory peak.
Dec 16 12:54:56.799465 systemd[1760]: Queued start job for default target default.target.
Dec 16 12:54:56.809464 systemd[1760]: Created slice app.slice - User Application Slice.
Dec 16 12:54:56.809514 systemd[1760]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories.
Dec 16 12:54:56.809538 systemd[1760]: Reached target paths.target - Paths.
Dec 16 12:54:56.809654 systemd[1760]: Reached target timers.target - Timers.
Dec 16 12:54:56.811729 systemd[1760]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 16 12:54:56.813842 systemd[1760]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories...
Dec 16 12:54:56.840229 systemd[1760]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 16 12:54:56.840582 systemd[1760]: Reached target sockets.target - Sockets.
Dec 16 12:54:56.841826 systemd[1760]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories.
Dec 16 12:54:56.842130 systemd[1760]: Reached target basic.target - Basic System.
Dec 16 12:54:56.842298 systemd[1760]: Reached target default.target - Main User Target.
Dec 16 12:54:56.842472 systemd[1760]: Startup finished in 221ms.
Dec 16 12:54:56.843177 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 16 12:54:56.852162 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 16 12:54:56.882933 systemd[1]: Started sshd@1-164.90.155.252:22-147.75.109.163:40836.service - OpenSSH per-connection server daemon (147.75.109.163:40836).
Dec 16 12:54:56.959046 sshd[1775]: Accepted publickey for core from 147.75.109.163 port 40836 ssh2: RSA SHA256:LU1qZA7a/A5pU4aT9e5vsRy+gIvYDHbmZDPRbtPLDD8
Dec 16 12:54:56.961168 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:54:56.968728 systemd-logind[1584]: New session 2 of user core.
Dec 16 12:54:56.978001 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 16 12:54:56.999845 sshd[1778]: Connection closed by 147.75.109.163 port 40836
Dec 16 12:54:57.000862 sshd-session[1775]: pam_unix(sshd:session): session closed for user core
Dec 16 12:54:57.019250 systemd[1]: sshd@1-164.90.155.252:22-147.75.109.163:40836.service: Deactivated successfully.
Dec 16 12:54:57.022001 systemd[1]: session-2.scope: Deactivated successfully.
Dec 16 12:54:57.023262 systemd-logind[1584]: Session 2 logged out. Waiting for processes to exit.
Dec 16 12:54:57.027394 systemd[1]: Started sshd@2-164.90.155.252:22-147.75.109.163:40844.service - OpenSSH per-connection server daemon (147.75.109.163:40844).
Dec 16 12:54:57.029381 systemd-logind[1584]: Removed session 2.
Dec 16 12:54:57.101247 sshd[1784]: Accepted publickey for core from 147.75.109.163 port 40844 ssh2: RSA SHA256:LU1qZA7a/A5pU4aT9e5vsRy+gIvYDHbmZDPRbtPLDD8
Dec 16 12:54:57.102936 sshd-session[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:54:57.112415 systemd-logind[1584]: New session 3 of user core.
Dec 16 12:54:57.122104 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 16 12:54:57.140968 sshd[1787]: Connection closed by 147.75.109.163 port 40844
Dec 16 12:54:57.141567 sshd-session[1784]: pam_unix(sshd:session): session closed for user core
Dec 16 12:54:57.157023 systemd[1]: sshd@2-164.90.155.252:22-147.75.109.163:40844.service: Deactivated successfully.
Dec 16 12:54:57.159810 systemd[1]: session-3.scope: Deactivated successfully.
Dec 16 12:54:57.161087 systemd-logind[1584]: Session 3 logged out. Waiting for processes to exit.
Dec 16 12:54:57.167147 systemd[1]: Started sshd@3-164.90.155.252:22-147.75.109.163:40846.service - OpenSSH per-connection server daemon (147.75.109.163:40846).
Dec 16 12:54:57.168037 systemd-logind[1584]: Removed session 3.
Dec 16 12:54:57.240927 sshd[1793]: Accepted publickey for core from 147.75.109.163 port 40846 ssh2: RSA SHA256:LU1qZA7a/A5pU4aT9e5vsRy+gIvYDHbmZDPRbtPLDD8
Dec 16 12:54:57.242586 sshd-session[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:54:57.248718 systemd-logind[1584]: New session 4 of user core.
Dec 16 12:54:57.258011 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 16 12:54:57.279676 sshd[1796]: Connection closed by 147.75.109.163 port 40846
Dec 16 12:54:57.279829 sshd-session[1793]: pam_unix(sshd:session): session closed for user core
Dec 16 12:54:57.297750 systemd[1]: sshd@3-164.90.155.252:22-147.75.109.163:40846.service: Deactivated successfully.
Dec 16 12:54:57.299744 systemd[1]: session-4.scope: Deactivated successfully.
Dec 16 12:54:57.300658 systemd-logind[1584]: Session 4 logged out. Waiting for processes to exit.
Dec 16 12:54:57.304038 systemd[1]: Started sshd@4-164.90.155.252:22-147.75.109.163:40858.service - OpenSSH per-connection server daemon (147.75.109.163:40858).
Dec 16 12:54:57.305484 systemd-logind[1584]: Removed session 4.
Dec 16 12:54:57.376875 sshd[1802]: Accepted publickey for core from 147.75.109.163 port 40858 ssh2: RSA SHA256:LU1qZA7a/A5pU4aT9e5vsRy+gIvYDHbmZDPRbtPLDD8
Dec 16 12:54:57.378745 sshd-session[1802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:54:57.386104 systemd-logind[1584]: New session 5 of user core.
Dec 16 12:54:57.394994 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 16 12:54:57.425164 sudo[1806]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 16 12:54:57.425478 sudo[1806]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 12:54:57.440957 sudo[1806]: pam_unix(sudo:session): session closed for user root
Dec 16 12:54:57.445175 sshd[1805]: Connection closed by 147.75.109.163 port 40858
Dec 16 12:54:57.445922 sshd-session[1802]: pam_unix(sshd:session): session closed for user core
Dec 16 12:54:57.457899 systemd[1]: sshd@4-164.90.155.252:22-147.75.109.163:40858.service: Deactivated successfully.
Dec 16 12:54:57.460597 systemd[1]: session-5.scope: Deactivated successfully.
Dec 16 12:54:57.463509 systemd-logind[1584]: Session 5 logged out. Waiting for processes to exit.
Dec 16 12:54:57.466805 systemd[1]: Started sshd@5-164.90.155.252:22-147.75.109.163:40862.service - OpenSSH per-connection server daemon (147.75.109.163:40862).
Dec 16 12:54:57.469509 systemd-logind[1584]: Removed session 5.
Dec 16 12:54:57.531949 sshd[1812]: Accepted publickey for core from 147.75.109.163 port 40862 ssh2: RSA SHA256:LU1qZA7a/A5pU4aT9e5vsRy+gIvYDHbmZDPRbtPLDD8
Dec 16 12:54:57.534408 sshd-session[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:54:57.540928 systemd-logind[1584]: New session 6 of user core.
Dec 16 12:54:57.550977 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 16 12:54:57.569626 sudo[1817]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 16 12:54:57.570003 sudo[1817]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 12:54:57.575407 sudo[1817]: pam_unix(sudo:session): session closed for user root
Dec 16 12:54:57.583186 sudo[1816]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 16 12:54:57.583478 sudo[1816]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 12:54:57.600215 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 12:54:57.661000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Dec 16 12:54:57.662918 kernel: kauditd_printk_skb: 39 callbacks suppressed
Dec 16 12:54:57.662977 kernel: audit: type=1305 audit(1765889697.661:227): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Dec 16 12:54:57.662995 augenrules[1839]: No rules
Dec 16 12:54:57.661000 audit[1839]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffd51c4320 a2=420 a3=0 items=0 ppid=1820 pid=1839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:54:57.666168 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 12:54:57.666647 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 12:54:57.668880 kernel: audit: type=1300 audit(1765889697.661:227): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffd51c4320 a2=420 a3=0 items=0 ppid=1820 pid=1839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:54:57.669025 sudo[1816]: pam_unix(sudo:session): session closed for user root
Dec 16 12:54:57.661000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 16 12:54:57.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:57.673818 kernel: audit: type=1327 audit(1765889697.661:227): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 16 12:54:57.673901 kernel: audit: type=1130 audit(1765889697.664:228): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:57.676500 kernel: audit: type=1131 audit(1765889697.664:229): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:57.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:57.674721 sshd-session[1812]: pam_unix(sshd:session): session closed for user core
Dec 16 12:54:57.676790 sshd[1815]: Connection closed by 147.75.109.163 port 40862
Dec 16 12:54:57.668000 audit[1816]: USER_END pid=1816 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:57.668000 audit[1816]: CRED_DISP pid=1816 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:57.682571 kernel: audit: type=1106 audit(1765889697.668:230): pid=1816 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:57.682693 kernel: audit: type=1104 audit(1765889697.668:231): pid=1816 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:57.682000 audit[1812]: USER_END pid=1812 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 12:54:57.683000 audit[1812]: CRED_DISP pid=1812 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 12:54:57.689110 kernel: audit: type=1106 audit(1765889697.682:232): pid=1812 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 12:54:57.689183 kernel: audit: type=1104 audit(1765889697.683:233): pid=1812 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 12:54:57.691915 systemd[1]: sshd@5-164.90.155.252:22-147.75.109.163:40862.service: Deactivated successfully.
Dec 16 12:54:57.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-164.90.155.252:22-147.75.109.163:40862 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:57.694663 kernel: audit: type=1131 audit(1765889697.691:234): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-164.90.155.252:22-147.75.109.163:40862 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:57.694807 systemd[1]: session-6.scope: Deactivated successfully.
Dec 16 12:54:57.697257 systemd-logind[1584]: Session 6 logged out. Waiting for processes to exit.
Dec 16 12:54:57.700984 systemd[1]: Started sshd@6-164.90.155.252:22-147.75.109.163:40870.service - OpenSSH per-connection server daemon (147.75.109.163:40870).
Dec 16 12:54:57.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-164.90.155.252:22-147.75.109.163:40870 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:57.702365 systemd-logind[1584]: Removed session 6.
Dec 16 12:54:57.770000 audit[1849]: USER_ACCT pid=1849 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 12:54:57.771161 sshd[1849]: Accepted publickey for core from 147.75.109.163 port 40870 ssh2: RSA SHA256:LU1qZA7a/A5pU4aT9e5vsRy+gIvYDHbmZDPRbtPLDD8
Dec 16 12:54:57.771000 audit[1849]: CRED_ACQ pid=1849 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 12:54:57.771000 audit[1849]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe9a129a50 a2=3 a3=0 items=0 ppid=1 pid=1849 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:54:57.771000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 12:54:57.772759 sshd-session[1849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:54:57.778864 systemd-logind[1584]: New session 7 of user core.
Dec 16 12:54:57.799301 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 16 12:54:57.802000 audit[1849]: USER_START pid=1849 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 12:54:57.805000 audit[1852]: CRED_ACQ pid=1852 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 12:54:57.820000 audit[1853]: USER_ACCT pid=1853 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:57.820986 sudo[1853]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 16 12:54:57.821273 sudo[1853]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 12:54:57.820000 audit[1853]: CRED_REFR pid=1853 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:57.823000 audit[1853]: USER_START pid=1853 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 12:54:58.329449 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 16 12:54:58.344167 (dockerd)[1871]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 16 12:54:58.699526 dockerd[1871]: time="2025-12-16T12:54:58.699395820Z" level=info msg="Starting up"
Dec 16 12:54:58.702617 dockerd[1871]: time="2025-12-16T12:54:58.702576987Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Dec 16 12:54:58.719346 dockerd[1871]: time="2025-12-16T12:54:58.719190750Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Dec 16 12:54:58.799482 dockerd[1871]: time="2025-12-16T12:54:58.799201817Z" level=info msg="Loading containers: start."
Dec 16 12:54:58.812669 kernel: Initializing XFRM netlink socket
Dec 16 12:54:58.885000 audit[1919]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1919 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:54:58.885000 audit[1919]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fffdcaab770 a2=0 a3=0 items=0 ppid=1871 pid=1919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:54:58.885000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Dec 16 12:54:58.888000 audit[1921]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1921 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:54:58.888000 audit[1921]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe5e8969b0 a2=0 a3=0 items=0 ppid=1871 pid=1921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:54:58.888000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Dec 16 12:54:58.891000 audit[1923]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1923 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:54:58.891000 audit[1923]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc0f4812e0 a2=0 a3=0 items=0 ppid=1871 pid=1923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:54:58.891000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244
Dec 16 12:54:58.893000 audit[1925]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1925 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:54:58.893000 audit[1925]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffffce3f4c0 a2=0 a3=0 items=0 ppid=1871 pid=1925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:54:58.893000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745
Dec 16 12:54:58.896000 audit[1927]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1927 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:54:58.896000 audit[1927]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe4c160170 a2=0 a3=0 items=0 ppid=1871 pid=1927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:54:58.896000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354
Dec 16 12:54:58.899000 audit[1929]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1929 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:54:58.899000 audit[1929]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffca911d8a0 a2=0 a3=0 items=0 ppid=1871 pid=1929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:54:58.899000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
Dec 16 12:54:58.902000 audit[1931]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1931 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:54:58.902000 audit[1931]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc0b58f1d0 a2=0 a3=0 items=0 ppid=1871 pid=1931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:54:58.902000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
Dec 16 12:54:58.906000 audit[1933]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1933 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:54:58.906000 audit[1933]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffccab0f330 a2=0 a3=0 items=0 ppid=1871 pid=1933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:54:58.906000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Dec 16 12:54:58.936000 audit[1936]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1936 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:54:58.936000 audit[1936]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffe564a8320 a2=0 a3=0 items=0 ppid=1871 pid=1936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:54:58.936000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38
Dec 16 12:54:58.939000 audit[1938]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1938 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:54:58.939000 audit[1938]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff71939020 a2=0 a3=0 items=0 ppid=1871 pid=1938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:54:58.939000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244
Dec 16 12:54:58.944000 audit[1940]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1940 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:54:58.944000 audit[1940]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffd45b9ef50 a2=0 a3=0 items=0 ppid=1871 pid=1940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:54:58.944000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745
Dec 16 12:54:58.948000 audit[1942]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1942 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:54:58.948000 audit[1942]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffdb7f406a0 a2=0 a3=0 items=0 ppid=1871 pid=1942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:54:58.948000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31
Dec 16 12:54:58.950000 audit[1944]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1944 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:54:58.950000 audit[1944]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffd8311f3a0 a2=0 a3=0 items=0 ppid=1871 pid=1944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:54:58.950000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354
Dec 16 12:54:59.000000 audit[1974]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1974 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:54:59.000000 audit[1974]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc04d69f90 a2=0 a3=0 items=0 ppid=1871 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:54:59.000000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Dec 16 12:54:59.003000 audit[1976]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1976 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:54:59.003000 audit[1976]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffcc4d0d210 a2=0 a3=0 items=0 ppid=1871 pid=1976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:54:59.003000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Dec 16 12:54:59.006000 audit[1978]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1978 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:54:59.006000 audit[1978]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcf6581ea0 a2=0 a3=0 items=0 ppid=1871 pid=1978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:54:59.006000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244
Dec 16 12:54:59.009000 audit[1980]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1980 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:54:59.009000 audit[1980]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcc49021d0 a2=0 a3=0 items=0 ppid=1871 pid=1980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:54:59.009000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745
Dec 16 12:54:59.012000 audit[1982]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1982 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:54:59.012000 audit[1982]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe3a1a06d0 a2=0 a3=0 items=0 ppid=1871 pid=1982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:54:59.012000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354
Dec 16 12:54:59.015000 audit[1984]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1984 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:54:59.015000 audit[1984]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff6d372a40 a2=0 a3=0 items=0 ppid=1871 pid=1984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:54:59.015000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
Dec 16 12:54:59.018000 audit[1986]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1986 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:54:59.018000 audit[1986]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffe5e09520 a2=0 a3=0 items=0 ppid=1871 pid=1986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:54:59.018000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
Dec 16 12:54:59.021000 audit[1988]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1988 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:54:59.021000 audit[1988]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffcb1cf1b50 a2=0 a3=0 items=0 ppid=1871 pid=1988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:54:59.021000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Dec 16 12:54:59.024000 audit[1990]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1990 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:54:59.024000 audit[1990]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffda144ec60 a2=0 a3=0 items=0 ppid=1871 pid=1990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:54:59.024000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238
Dec 16 12:54:59.027000 audit[1992]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1992 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:54:59.027000 audit[1992]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff9e1afb10 a2=0 a3=0 items=0 ppid=1871 pid=1992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:54:59.027000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244
Dec 16 12:54:59.030000 audit[1994]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1994 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:54:59.030000 audit[1994]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffc67495e40 a2=0 a3=0 items=0 ppid=1871 pid=1994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:54:59.030000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745
Dec 16 12:54:59.033000 audit[1996]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1996 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:54:59.033000 audit[1996]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fff61eb0350 a2=0 a3=0 items=0 ppid=1871 pid=1996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:54:59.033000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31
Dec 16 12:54:59.035000 audit[1998]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1998 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:54:59.035000 audit[1998]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffde385df70 a2=0 a3=0 items=0 ppid=1871 pid=1998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:54:59.035000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354
Dec 16 12:54:59.043000 audit[2003]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2003 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:54:59.043000 audit[2003]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd4a5b8890 a2=0 a3=0 items=0 ppid=1871 pid=2003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:54:59.043000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
Dec 16 12:54:59.047000 audit[2005]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2005 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:54:59.047000 audit[2005]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffffea9a090 a2=0 a3=0 items=0 ppid=1871 pid=2005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:54:59.047000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
Dec 16 12:54:59.050000 audit[2007]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2007 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:54:59.050000 audit[2007]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe839d4730 a2=0 a3=0 items=0 ppid=1871 pid=2007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:54:59.050000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Dec 16 12:54:59.052000 audit[2009]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2009 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:54:59.052000 audit[2009]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff4892c770 a2=0 a3=0 items=0 ppid=1871 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:54:59.052000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
Dec 16 12:54:59.056000 audit[2011]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2011 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:54:59.056000 audit[2011]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffda3e33ef0 a2=0 a3=0 items=0 ppid=1871 pid=2011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:54:59.056000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
Dec 16 12:54:59.058000 audit[2013]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2013 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:54:59.058000 audit[2013]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fffa7d711e0 a2=0 a3=0 items=0 ppid=1871 pid=2013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:54:59.058000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Dec 16 12:54:59.065364 systemd-timesyncd[1481]: Network configuration changed, trying to establish connection.
Dec 16 12:54:59.068175 systemd-timesyncd[1481]: Network configuration changed, trying to establish connection. Dec 16 12:54:59.081427 systemd-timesyncd[1481]: Network configuration changed, trying to establish connection. Dec 16 12:54:59.091000 audit[2018]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2018 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:54:59.091000 audit[2018]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffee1ca4100 a2=0 a3=0 items=0 ppid=1871 pid=2018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:54:59.091000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 16 12:54:59.094000 audit[2020]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2020 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:54:59.094000 audit[2020]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffe770d76f0 a2=0 a3=0 items=0 ppid=1871 pid=2020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:54:59.094000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 16 12:54:59.107000 audit[2028]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2028 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:54:59.107000 audit[2028]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffc351a98f0 a2=0 a3=0 items=0 ppid=1871 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:54:59.107000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 16 12:54:59.119000 audit[2034]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2034 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:54:59.119000 audit[2034]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fffcdebcf90 a2=0 a3=0 items=0 ppid=1871 pid=2034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:54:59.119000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 16 12:54:59.122000 audit[2036]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2036 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:54:59.122000 audit[2036]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffd178b6740 a2=0 a3=0 items=0 ppid=1871 pid=2036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:54:59.122000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 16 12:54:59.125000 audit[2038]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2038 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Dec 16 12:54:59.125000 audit[2038]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fffb593ab60 a2=0 a3=0 items=0 ppid=1871 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:54:59.125000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 16 12:54:59.128000 audit[2040]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2040 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:54:59.128000 audit[2040]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fff5e6acbd0 a2=0 a3=0 items=0 ppid=1871 pid=2040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:54:59.128000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:54:59.131000 audit[2042]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2042 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:54:59.131000 audit[2042]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc2610a7b0 a2=0 a3=0 items=0 ppid=1871 pid=2042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:54:59.131000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 16 12:54:59.133334 systemd-networkd[1510]: docker0: Link UP Dec 16 12:54:59.134125 systemd-timesyncd[1481]: Network configuration changed, trying to establish connection. Dec 16 12:54:59.136463 dockerd[1871]: time="2025-12-16T12:54:59.136399358Z" level=info msg="Loading containers: done." Dec 16 12:54:59.153653 dockerd[1871]: time="2025-12-16T12:54:59.153551763Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 12:54:59.153852 dockerd[1871]: time="2025-12-16T12:54:59.153723300Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 12:54:59.153852 dockerd[1871]: time="2025-12-16T12:54:59.153845243Z" level=info msg="Initializing buildkit" Dec 16 12:54:59.184984 dockerd[1871]: time="2025-12-16T12:54:59.184912548Z" level=info msg="Completed buildkit initialization" Dec 16 12:54:59.196206 dockerd[1871]: time="2025-12-16T12:54:59.196132791Z" level=info msg="Daemon has completed initialization" Dec 16 12:54:59.196893 dockerd[1871]: time="2025-12-16T12:54:59.196205063Z" level=info msg="API listen on /run/docker.sock" Dec 16 12:54:59.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:54:59.197173 systemd[1]: Started docker.service - Docker Application Container Engine. 
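The audit PROCTITLE records above carry the full command line of each `xtables-nft-multi` invocation, hex-encoded with NUL bytes separating the argv elements. A minimal decoder sketch (the helper name is ours; the hex string is copied verbatim from the first PROCTITLE entry above):

```python
# Audit PROCTITLE values are the process argv, NUL-separated and hex-encoded.
# Minimal decoder, assuming a well-formed even-length hex string as in the log.
def decode_proctitle(hexstr: str) -> str:
    return bytes.fromhex(hexstr).replace(b"\x00", b" ").decode()

# First PROCTITLE entry above (Docker creating its DOCKER-USER chain):
cmd = decode_proctitle(
    "2F7573722F62696E2F69707461626C6573002D2D77616974"
    "002D740066696C746572002D4E00444F434B45522D55534552"
)
print(cmd)  # → /usr/bin/iptables --wait -t filter -N DOCKER-USER
```

Applied to the remaining entries, the same decoding shows Docker installing its usual chains (DOCKER-USER, DOCKER-FORWARD, DOCKER-ISOLATION-STAGE-1/2) for both the `iptables` (family=2) and `ip6tables` (family=10) records.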
Dec 16 12:55:00.092664 containerd[1617]: time="2025-12-16T12:55:00.092580993Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Dec 16 12:55:00.757133 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3115172002.mount: Deactivated successfully. Dec 16 12:55:02.227315 containerd[1617]: time="2025-12-16T12:55:02.227245803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:55:02.229152 containerd[1617]: time="2025-12-16T12:55:02.229088428Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=28445968" Dec 16 12:55:02.229863 containerd[1617]: time="2025-12-16T12:55:02.229808933Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:55:02.234555 containerd[1617]: time="2025-12-16T12:55:02.234482187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:55:02.237095 containerd[1617]: time="2025-12-16T12:55:02.237024206Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 2.144360619s" Dec 16 12:55:02.240127 containerd[1617]: time="2025-12-16T12:55:02.237166905Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Dec 16 12:55:02.240679 containerd[1617]: 
time="2025-12-16T12:55:02.238606447Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Dec 16 12:55:04.079684 containerd[1617]: time="2025-12-16T12:55:04.078793122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:55:04.081031 containerd[1617]: time="2025-12-16T12:55:04.080979277Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26008626" Dec 16 12:55:04.084296 containerd[1617]: time="2025-12-16T12:55:04.084175650Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:55:04.089616 containerd[1617]: time="2025-12-16T12:55:04.089520611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:55:04.090728 containerd[1617]: time="2025-12-16T12:55:04.090493429Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.849781814s" Dec 16 12:55:04.090728 containerd[1617]: time="2025-12-16T12:55:04.090546782Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Dec 16 12:55:04.091349 containerd[1617]: time="2025-12-16T12:55:04.091317855Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Dec 16 
12:55:05.433428 containerd[1617]: time="2025-12-16T12:55:05.432738406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:55:05.434272 containerd[1617]: time="2025-12-16T12:55:05.433781070Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=0" Dec 16 12:55:05.435409 containerd[1617]: time="2025-12-16T12:55:05.435358695Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:55:05.439259 containerd[1617]: time="2025-12-16T12:55:05.439182894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:55:05.440324 containerd[1617]: time="2025-12-16T12:55:05.440095981Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.348639038s" Dec 16 12:55:05.440324 containerd[1617]: time="2025-12-16T12:55:05.440131240Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Dec 16 12:55:05.441518 containerd[1617]: time="2025-12-16T12:55:05.441467868Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Dec 16 12:55:06.502602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2896434784.mount: Deactivated successfully. 
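Each "Pulled image" entry above reports both an uncompressed size and an elapsed time, so a rough effective throughput can be derived from the log alone. A sketch using the kube-scheduler figures copied from the entry above (the helper name is ours, not a containerd API):

```python
# Rough effective pull throughput from a containerd "Pulled image" log entry.
# Uses decimal megabytes; size and duration are as reported in the log above.
def throughput_mb_s(size_bytes: int, seconds: float) -> float:
    """Bytes pulled per second, expressed in MB/s (1 MB = 1e6 bytes)."""
    return size_bytes / seconds / 1e6

# kube-scheduler:v1.33.7: size "21815154" pulled in 1.348639038s
rate = throughput_mb_s(21815154, 1.348639038)  # ≈ 16.2 MB/s
```

Note the reported size is the expanded image size, not bytes on the wire, so this is a lower-level sanity check rather than a network measurement.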
Dec 16 12:55:06.954492 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 12:55:06.958505 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:55:07.181224 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:55:07.182660 kernel: kauditd_printk_skb: 132 callbacks suppressed Dec 16 12:55:07.182779 kernel: audit: type=1130 audit(1765889707.180:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:55:07.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:55:07.193845 containerd[1617]: time="2025-12-16T12:55:07.193785547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:55:07.195015 containerd[1617]: time="2025-12-16T12:55:07.194711055Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=20340589" Dec 16 12:55:07.195962 containerd[1617]: time="2025-12-16T12:55:07.195913606Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:55:07.199982 containerd[1617]: time="2025-12-16T12:55:07.198516613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:55:07.199076 (kubelet)[2173]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:55:07.201154 
containerd[1617]: time="2025-12-16T12:55:07.200115570Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.758431901s" Dec 16 12:55:07.201154 containerd[1617]: time="2025-12-16T12:55:07.200234379Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Dec 16 12:55:07.203237 containerd[1617]: time="2025-12-16T12:55:07.202899671Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Dec 16 12:55:07.205420 systemd-resolved[1294]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Dec 16 12:55:07.275914 kubelet[2173]: E1216 12:55:07.275698 2173 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:55:07.281011 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:55:07.281169 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:55:07.281617 systemd[1]: kubelet.service: Consumed 225ms CPU time, 110.7M memory peak. Dec 16 12:55:07.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 16 12:55:07.285684 kernel: audit: type=1131 audit(1765889707.280:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:55:07.765676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1783133832.mount: Deactivated successfully. Dec 16 12:55:08.662593 containerd[1617]: time="2025-12-16T12:55:08.662521423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:55:08.664791 containerd[1617]: time="2025-12-16T12:55:08.664705633Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=0" Dec 16 12:55:08.665332 containerd[1617]: time="2025-12-16T12:55:08.665252472Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:55:08.669378 containerd[1617]: time="2025-12-16T12:55:08.669281332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:55:08.670970 containerd[1617]: time="2025-12-16T12:55:08.670790141Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.467839147s" Dec 16 12:55:08.670970 containerd[1617]: time="2025-12-16T12:55:08.670840808Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference 
\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Dec 16 12:55:08.672422 containerd[1617]: time="2025-12-16T12:55:08.672364716Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 12:55:09.251523 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1800517065.mount: Deactivated successfully. Dec 16 12:55:09.256795 containerd[1617]: time="2025-12-16T12:55:09.255585722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:55:09.256795 containerd[1617]: time="2025-12-16T12:55:09.256671659Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 12:55:09.256795 containerd[1617]: time="2025-12-16T12:55:09.256720863Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:55:09.259039 containerd[1617]: time="2025-12-16T12:55:09.258984973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:55:09.260071 containerd[1617]: time="2025-12-16T12:55:09.260035473Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 587.630473ms" Dec 16 12:55:09.260291 containerd[1617]: time="2025-12-16T12:55:09.260274631Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image 
reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 16 12:55:09.261351 containerd[1617]: time="2025-12-16T12:55:09.261271168Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Dec 16 12:55:09.851778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2936455302.mount: Deactivated successfully. Dec 16 12:55:10.256900 systemd-resolved[1294]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Dec 16 12:55:12.707196 containerd[1617]: time="2025-12-16T12:55:12.707126746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:55:12.708921 containerd[1617]: time="2025-12-16T12:55:12.708867813Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=46127678" Dec 16 12:55:12.709266 containerd[1617]: time="2025-12-16T12:55:12.709236189Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:55:12.712569 containerd[1617]: time="2025-12-16T12:55:12.712487704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:55:12.713672 containerd[1617]: time="2025-12-16T12:55:12.713426082Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.452071182s" Dec 16 12:55:12.713672 containerd[1617]: time="2025-12-16T12:55:12.713470004Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference 
\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Dec 16 12:55:16.576992 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:55:16.577175 systemd[1]: kubelet.service: Consumed 225ms CPU time, 110.7M memory peak. Dec 16 12:55:16.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:55:16.584231 kernel: audit: type=1130 audit(1765889716.576:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:55:16.584371 kernel: audit: type=1131 audit(1765889716.576:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:55:16.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:55:16.581947 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:55:16.621961 systemd[1]: Reload requested from client PID 2318 ('systemctl') (unit session-7.scope)... Dec 16 12:55:16.621981 systemd[1]: Reloading... Dec 16 12:55:16.802696 zram_generator::config[2367]: No configuration found. Dec 16 12:55:17.095667 systemd[1]: Reloading finished in 473 ms. 
Dec 16 12:55:17.134754 kernel: audit: type=1334 audit(1765889717.130:289): prog-id=61 op=LOAD Dec 16 12:55:17.130000 audit: BPF prog-id=61 op=LOAD Dec 16 12:55:17.138375 kernel: audit: type=1334 audit(1765889717.130:290): prog-id=62 op=LOAD Dec 16 12:55:17.130000 audit: BPF prog-id=62 op=LOAD Dec 16 12:55:17.130000 audit: BPF prog-id=54 op=UNLOAD Dec 16 12:55:17.130000 audit: BPF prog-id=55 op=UNLOAD Dec 16 12:55:17.144664 kernel: audit: type=1334 audit(1765889717.130:291): prog-id=54 op=UNLOAD Dec 16 12:55:17.144771 kernel: audit: type=1334 audit(1765889717.130:292): prog-id=55 op=UNLOAD Dec 16 12:55:17.144793 kernel: audit: type=1334 audit(1765889717.132:293): prog-id=63 op=LOAD Dec 16 12:55:17.132000 audit: BPF prog-id=63 op=LOAD Dec 16 12:55:17.132000 audit: BPF prog-id=47 op=UNLOAD Dec 16 12:55:17.147737 kernel: audit: type=1334 audit(1765889717.132:294): prog-id=47 op=UNLOAD Dec 16 12:55:17.147847 kernel: audit: type=1334 audit(1765889717.132:295): prog-id=64 op=LOAD Dec 16 12:55:17.132000 audit: BPF prog-id=64 op=LOAD Dec 16 12:55:17.133000 audit: BPF prog-id=65 op=LOAD Dec 16 12:55:17.133000 audit: BPF prog-id=48 op=UNLOAD Dec 16 12:55:17.133000 audit: BPF prog-id=49 op=UNLOAD Dec 16 12:55:17.133000 audit: BPF prog-id=66 op=LOAD Dec 16 12:55:17.133000 audit: BPF prog-id=56 op=UNLOAD Dec 16 12:55:17.135000 audit: BPF prog-id=67 op=LOAD Dec 16 12:55:17.135000 audit: BPF prog-id=41 op=UNLOAD Dec 16 12:55:17.136000 audit: BPF prog-id=68 op=LOAD Dec 16 12:55:17.136000 audit: BPF prog-id=69 op=LOAD Dec 16 12:55:17.136000 audit: BPF prog-id=42 op=UNLOAD Dec 16 12:55:17.136000 audit: BPF prog-id=43 op=UNLOAD Dec 16 12:55:17.136000 audit: BPF prog-id=70 op=LOAD Dec 16 12:55:17.136000 audit: BPF prog-id=57 op=UNLOAD Dec 16 12:55:17.140000 audit: BPF prog-id=71 op=LOAD Dec 16 12:55:17.142000 audit: BPF prog-id=58 op=UNLOAD Dec 16 12:55:17.142000 audit: BPF prog-id=72 op=LOAD Dec 16 12:55:17.142000 audit: BPF prog-id=73 op=LOAD Dec 16 12:55:17.142000 audit: BPF 
prog-id=59 op=UNLOAD Dec 16 12:55:17.142000 audit: BPF prog-id=60 op=UNLOAD Dec 16 12:55:17.143000 audit: BPF prog-id=74 op=LOAD Dec 16 12:55:17.143000 audit: BPF prog-id=44 op=UNLOAD Dec 16 12:55:17.143000 audit: BPF prog-id=75 op=LOAD Dec 16 12:55:17.143000 audit: BPF prog-id=76 op=LOAD Dec 16 12:55:17.143000 audit: BPF prog-id=45 op=UNLOAD Dec 16 12:55:17.143000 audit: BPF prog-id=46 op=UNLOAD Dec 16 12:55:17.146000 audit: BPF prog-id=77 op=LOAD Dec 16 12:55:17.147000 audit: BPF prog-id=50 op=UNLOAD Dec 16 12:55:17.148000 audit: BPF prog-id=78 op=LOAD Dec 16 12:55:17.148000 audit: BPF prog-id=51 op=UNLOAD Dec 16 12:55:17.148000 audit: BPF prog-id=79 op=LOAD Dec 16 12:55:17.151657 kernel: audit: type=1334 audit(1765889717.133:296): prog-id=65 op=LOAD Dec 16 12:55:17.148000 audit: BPF prog-id=80 op=LOAD Dec 16 12:55:17.148000 audit: BPF prog-id=52 op=UNLOAD Dec 16 12:55:17.148000 audit: BPF prog-id=53 op=UNLOAD Dec 16 12:55:17.177854 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 12:55:17.177976 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 12:55:17.178403 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:55:17.178489 systemd[1]: kubelet.service: Consumed 133ms CPU time, 98.8M memory peak. Dec 16 12:55:17.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:55:17.180613 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:55:17.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:55:17.356314 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 12:55:17.367079 (kubelet)[2417]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:55:17.440675 kubelet[2417]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:55:17.440675 kubelet[2417]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:55:17.440675 kubelet[2417]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:55:17.443409 kubelet[2417]: I1216 12:55:17.443286 2417 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:55:18.258662 kubelet[2417]: I1216 12:55:18.258243 2417 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 12:55:18.258662 kubelet[2417]: I1216 12:55:18.258294 2417 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:55:18.258907 kubelet[2417]: I1216 12:55:18.258877 2417 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 12:55:18.322875 kubelet[2417]: I1216 12:55:18.322828 2417 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:55:18.323835 kubelet[2417]: E1216 12:55:18.323623 2417 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://164.90.155.252:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 
164.90.155.252:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 12:55:18.345213 kubelet[2417]: I1216 12:55:18.345187 2417 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:55:18.356681 kubelet[2417]: I1216 12:55:18.356447 2417 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 16 12:55:18.359862 kubelet[2417]: I1216 12:55:18.359751 2417 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:55:18.365427 kubelet[2417]: I1216 12:55:18.360195 2417 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515.1.0-3-ef2be4b8ba","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManage
rScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:55:18.365427 kubelet[2417]: I1216 12:55:18.365072 2417 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 12:55:18.365427 kubelet[2417]: I1216 12:55:18.365098 2417 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 12:55:18.365427 kubelet[2417]: I1216 12:55:18.365329 2417 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:55:18.370771 kubelet[2417]: I1216 12:55:18.370727 2417 kubelet.go:480] "Attempting to sync node with API server" Dec 16 12:55:18.370998 kubelet[2417]: I1216 12:55:18.370983 2417 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:55:18.371096 kubelet[2417]: I1216 12:55:18.371088 2417 kubelet.go:386] "Adding apiserver pod source" Dec 16 12:55:18.371165 kubelet[2417]: I1216 12:55:18.371157 2417 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:55:18.379010 kubelet[2417]: E1216 12:55:18.378506 2417 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://164.90.155.252:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515.1.0-3-ef2be4b8ba&limit=500&resourceVersion=0\": dial tcp 164.90.155.252:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 12:55:18.384209 kubelet[2417]: I1216 12:55:18.384169 2417 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 12:55:18.385101 kubelet[2417]: I1216 12:55:18.384735 2417 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is 
disabled" Dec 16 12:55:18.386334 kubelet[2417]: W1216 12:55:18.386297 2417 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 12:55:18.388341 kubelet[2417]: E1216 12:55:18.388298 2417 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://164.90.155.252:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 164.90.155.252:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 12:55:18.393750 kubelet[2417]: I1216 12:55:18.393704 2417 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 12:55:18.393948 kubelet[2417]: I1216 12:55:18.393805 2417 server.go:1289] "Started kubelet" Dec 16 12:55:18.401677 kubelet[2417]: I1216 12:55:18.399820 2417 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:55:18.402730 kubelet[2417]: I1216 12:55:18.402665 2417 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:55:18.404463 kubelet[2417]: I1216 12:55:18.404430 2417 server.go:317] "Adding debug handlers to kubelet server" Dec 16 12:55:18.415070 kubelet[2417]: I1216 12:55:18.414775 2417 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 12:55:18.415235 kubelet[2417]: E1216 12:55:18.415195 2417 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-3-ef2be4b8ba\" not found" Dec 16 12:55:18.415000 audit[2433]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2433 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:55:18.415000 audit[2433]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc65a26620 a2=0 a3=0 items=0 ppid=2417 pid=2433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:18.415000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 12:55:18.416986 kubelet[2417]: I1216 12:55:18.416849 2417 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:55:18.417321 kubelet[2417]: I1216 12:55:18.417265 2417 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 12:55:18.417400 kubelet[2417]: I1216 12:55:18.417336 2417 reconciler.go:26] "Reconciler: start to sync state" Dec 16 12:55:18.417510 kubelet[2417]: I1216 12:55:18.417495 2417 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:55:18.418000 audit[2434]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2434 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:55:18.418000 audit[2434]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffda609e800 a2=0 a3=0 items=0 ppid=2417 pid=2434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:18.418000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 12:55:18.422656 kubelet[2417]: I1216 12:55:18.422085 2417 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:55:18.423000 audit[2436]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2436 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:55:18.425998 kubelet[2417]: E1216 12:55:18.423024 2417 event.go:368] "Unable to write event (may retry after sleeping)" 
err="Post \"https://164.90.155.252:6443/api/v1/namespaces/default/events\": dial tcp 164.90.155.252:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4515.1.0-3-ef2be4b8ba.1881b357c15fbe1a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4515.1.0-3-ef2be4b8ba,UID:ci-4515.1.0-3-ef2be4b8ba,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4515.1.0-3-ef2be4b8ba,},FirstTimestamp:2025-12-16 12:55:18.393749018 +0000 UTC m=+1.020544944,LastTimestamp:2025-12-16 12:55:18.393749018 +0000 UTC m=+1.020544944,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515.1.0-3-ef2be4b8ba,}" Dec 16 12:55:18.423000 audit[2436]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe9b9bcae0 a2=0 a3=0 items=0 ppid=2417 pid=2436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:18.423000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:55:18.427036 kubelet[2417]: E1216 12:55:18.426998 2417 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://164.90.155.252:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 164.90.155.252:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 12:55:18.427126 kubelet[2417]: E1216 12:55:18.427091 2417 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://164.90.155.252:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-3-ef2be4b8ba?timeout=10s\": dial tcp 164.90.155.252:6443: connect: connection refused" interval="200ms" Dec 16 12:55:18.427643 kubelet[2417]: I1216 12:55:18.427601 2417 factory.go:223] Registration of the systemd container factory successfully Dec 16 12:55:18.427753 kubelet[2417]: I1216 12:55:18.427735 2417 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:55:18.429843 kubelet[2417]: I1216 12:55:18.429819 2417 factory.go:223] Registration of the containerd container factory successfully Dec 16 12:55:18.430000 audit[2438]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2438 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:55:18.430000 audit[2438]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffcab497480 a2=0 a3=0 items=0 ppid=2417 pid=2438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:18.430000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:55:18.440000 audit[2441]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2441 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:55:18.440000 audit[2441]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff2d1a5c80 a2=0 a3=0 items=0 ppid=2417 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:18.440000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 16 12:55:18.443000 audit[2442]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2442 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:55:18.443000 audit[2442]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffeeb950640 a2=0 a3=0 items=0 ppid=2417 pid=2442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:18.443000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 12:55:18.444124 kubelet[2417]: I1216 12:55:18.442086 2417 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 12:55:18.444447 kubelet[2417]: I1216 12:55:18.444159 2417 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 16 12:55:18.444447 kubelet[2417]: I1216 12:55:18.444198 2417 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 12:55:18.444447 kubelet[2417]: I1216 12:55:18.444237 2417 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 12:55:18.444447 kubelet[2417]: I1216 12:55:18.444247 2417 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 12:55:18.444447 kubelet[2417]: E1216 12:55:18.444316 2417 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:55:18.445000 audit[2443]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2443 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:55:18.445000 audit[2443]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc966dea90 a2=0 a3=0 items=0 ppid=2417 pid=2443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:18.445000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 12:55:18.447000 audit[2444]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2444 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:55:18.447000 audit[2444]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe43013d40 a2=0 a3=0 items=0 ppid=2417 pid=2444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:18.447000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 12:55:18.449000 audit[2447]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2447 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:55:18.449000 audit[2447]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdf1cfb050 a2=0 a3=0 items=0 ppid=2417 pid=2447 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:18.449000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 12:55:18.452976 kubelet[2417]: E1216 12:55:18.452943 2417 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:55:18.453166 kubelet[2417]: E1216 12:55:18.453143 2417 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://164.90.155.252:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 164.90.155.252:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 12:55:18.453000 audit[2449]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2449 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:55:18.453000 audit[2449]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd21319050 a2=0 a3=0 items=0 ppid=2417 pid=2449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:18.453000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 12:55:18.457000 audit[2450]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2450 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:55:18.457000 audit[2450]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc02b917a0 a2=0 a3=0 items=0 ppid=2417 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:18.457000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 12:55:18.460000 audit[2452]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2452 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:55:18.460000 audit[2452]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc06a3e330 a2=0 a3=0 items=0 ppid=2417 pid=2452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:18.460000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 12:55:18.464625 kubelet[2417]: I1216 12:55:18.464541 2417 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:55:18.464625 kubelet[2417]: I1216 12:55:18.464616 2417 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:55:18.464823 kubelet[2417]: I1216 12:55:18.464688 2417 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:55:18.466121 kubelet[2417]: I1216 12:55:18.466076 2417 policy_none.go:49] "None policy: Start" Dec 16 12:55:18.466121 kubelet[2417]: I1216 12:55:18.466107 2417 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 12:55:18.466121 kubelet[2417]: I1216 12:55:18.466122 2417 state_mem.go:35] "Initializing new in-memory state store" Dec 16 12:55:18.474087 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 12:55:18.493290 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 16 12:55:18.513998 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 12:55:18.515971 kubelet[2417]: E1216 12:55:18.515769 2417 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-3-ef2be4b8ba\" not found" Dec 16 12:55:18.516651 kubelet[2417]: E1216 12:55:18.516587 2417 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 12:55:18.517030 kubelet[2417]: I1216 12:55:18.516983 2417 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:55:18.517215 kubelet[2417]: I1216 12:55:18.517000 2417 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:55:18.518037 kubelet[2417]: I1216 12:55:18.517988 2417 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:55:18.519690 kubelet[2417]: E1216 12:55:18.519667 2417 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 12:55:18.519819 kubelet[2417]: E1216 12:55:18.519801 2417 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4515.1.0-3-ef2be4b8ba\" not found" Dec 16 12:55:18.560743 systemd[1]: Created slice kubepods-burstable-pod5f35eb11cacaf03543b99c51360ac568.slice - libcontainer container kubepods-burstable-pod5f35eb11cacaf03543b99c51360ac568.slice. 
Dec 16 12:55:18.581170 kubelet[2417]: E1216 12:55:18.581117 2417 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-3-ef2be4b8ba\" not found" node="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:18.585687 systemd[1]: Created slice kubepods-burstable-podf6e1aba290bf0d5830d9547aef9b44bc.slice - libcontainer container kubepods-burstable-podf6e1aba290bf0d5830d9547aef9b44bc.slice. Dec 16 12:55:18.589262 kubelet[2417]: E1216 12:55:18.589195 2417 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-3-ef2be4b8ba\" not found" node="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:18.595266 systemd[1]: Created slice kubepods-burstable-pod87816fa8f93521a5b540cfe0bb82be90.slice - libcontainer container kubepods-burstable-pod87816fa8f93521a5b540cfe0bb82be90.slice. Dec 16 12:55:18.600995 kubelet[2417]: E1216 12:55:18.600928 2417 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-3-ef2be4b8ba\" not found" node="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:18.619029 kubelet[2417]: I1216 12:55:18.618615 2417 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:18.619337 kubelet[2417]: E1216 12:55:18.619311 2417 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://164.90.155.252:6443/api/v1/nodes\": dial tcp 164.90.155.252:6443: connect: connection refused" node="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:18.628215 kubelet[2417]: E1216 12:55:18.628147 2417 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.90.155.252:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-3-ef2be4b8ba?timeout=10s\": dial tcp 164.90.155.252:6443: connect: connection refused" interval="400ms" Dec 16 12:55:18.717926 kubelet[2417]: I1216 12:55:18.717870 2417 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5f35eb11cacaf03543b99c51360ac568-ca-certs\") pod \"kube-apiserver-ci-4515.1.0-3-ef2be4b8ba\" (UID: \"5f35eb11cacaf03543b99c51360ac568\") " pod="kube-system/kube-apiserver-ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:18.718285 kubelet[2417]: I1216 12:55:18.718262 2417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5f35eb11cacaf03543b99c51360ac568-k8s-certs\") pod \"kube-apiserver-ci-4515.1.0-3-ef2be4b8ba\" (UID: \"5f35eb11cacaf03543b99c51360ac568\") " pod="kube-system/kube-apiserver-ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:18.718549 kubelet[2417]: I1216 12:55:18.718388 2417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f6e1aba290bf0d5830d9547aef9b44bc-ca-certs\") pod \"kube-controller-manager-ci-4515.1.0-3-ef2be4b8ba\" (UID: \"f6e1aba290bf0d5830d9547aef9b44bc\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:18.718549 kubelet[2417]: I1216 12:55:18.718412 2417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f6e1aba290bf0d5830d9547aef9b44bc-k8s-certs\") pod \"kube-controller-manager-ci-4515.1.0-3-ef2be4b8ba\" (UID: \"f6e1aba290bf0d5830d9547aef9b44bc\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:18.718549 kubelet[2417]: I1216 12:55:18.718431 2417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5f35eb11cacaf03543b99c51360ac568-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515.1.0-3-ef2be4b8ba\" (UID: \"5f35eb11cacaf03543b99c51360ac568\") " 
pod="kube-system/kube-apiserver-ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:18.718549 kubelet[2417]: I1216 12:55:18.718449 2417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f6e1aba290bf0d5830d9547aef9b44bc-flexvolume-dir\") pod \"kube-controller-manager-ci-4515.1.0-3-ef2be4b8ba\" (UID: \"f6e1aba290bf0d5830d9547aef9b44bc\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:18.718549 kubelet[2417]: I1216 12:55:18.718464 2417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f6e1aba290bf0d5830d9547aef9b44bc-kubeconfig\") pod \"kube-controller-manager-ci-4515.1.0-3-ef2be4b8ba\" (UID: \"f6e1aba290bf0d5830d9547aef9b44bc\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:18.718776 kubelet[2417]: I1216 12:55:18.718478 2417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f6e1aba290bf0d5830d9547aef9b44bc-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515.1.0-3-ef2be4b8ba\" (UID: \"f6e1aba290bf0d5830d9547aef9b44bc\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:18.718776 kubelet[2417]: I1216 12:55:18.718496 2417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/87816fa8f93521a5b540cfe0bb82be90-kubeconfig\") pod \"kube-scheduler-ci-4515.1.0-3-ef2be4b8ba\" (UID: \"87816fa8f93521a5b540cfe0bb82be90\") " pod="kube-system/kube-scheduler-ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:18.822197 kubelet[2417]: I1216 12:55:18.821236 2417 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:18.822197 kubelet[2417]: E1216 
12:55:18.821675 2417 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://164.90.155.252:6443/api/v1/nodes\": dial tcp 164.90.155.252:6443: connect: connection refused" node="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:18.882709 kubelet[2417]: E1216 12:55:18.882615 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:18.884043 containerd[1617]: time="2025-12-16T12:55:18.883984198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4515.1.0-3-ef2be4b8ba,Uid:5f35eb11cacaf03543b99c51360ac568,Namespace:kube-system,Attempt:0,}" Dec 16 12:55:18.890448 kubelet[2417]: E1216 12:55:18.890248 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:18.892130 containerd[1617]: time="2025-12-16T12:55:18.891587281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515.1.0-3-ef2be4b8ba,Uid:f6e1aba290bf0d5830d9547aef9b44bc,Namespace:kube-system,Attempt:0,}" Dec 16 12:55:18.902353 kubelet[2417]: E1216 12:55:18.902010 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:18.902983 containerd[1617]: time="2025-12-16T12:55:18.902786802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515.1.0-3-ef2be4b8ba,Uid:87816fa8f93521a5b540cfe0bb82be90,Namespace:kube-system,Attempt:0,}" Dec 16 12:55:19.033395 kubelet[2417]: E1216 12:55:19.033346 2417 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://164.90.155.252:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-3-ef2be4b8ba?timeout=10s\": dial tcp 164.90.155.252:6443: connect: connection refused" interval="800ms" Dec 16 12:55:19.058188 containerd[1617]: time="2025-12-16T12:55:19.057879659Z" level=info msg="connecting to shim fd6f1119b46e3d239d1c946c8fb9e7b25ff4c5dcb07ef7ac85cb4fe1f4cc374d" address="unix:///run/containerd/s/c11bf76a82c6c1b932392ff63168dbee20f07f7d4d415c0c1c7f2d9de87e336d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:55:19.065006 containerd[1617]: time="2025-12-16T12:55:19.064937024Z" level=info msg="connecting to shim 70e04c91aa263c1b4631ff8e242052ed98d37b34b1536004cd67cf1e92d74c0d" address="unix:///run/containerd/s/d9520ef8636b390032639961d7bac094b9206d29055401cd3759088564c764ec" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:55:19.066017 containerd[1617]: time="2025-12-16T12:55:19.065961363Z" level=info msg="connecting to shim 5ccb0d0de3a6adade3a5aeb904778dcd065cbe8e1abf1f121795b83d3bbbf7c5" address="unix:///run/containerd/s/23a5244a87db6ca60b175cdd716f037231625f94c0e54b4df106d1a481f7b2fe" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:55:19.180952 systemd[1]: Started cri-containerd-fd6f1119b46e3d239d1c946c8fb9e7b25ff4c5dcb07ef7ac85cb4fe1f4cc374d.scope - libcontainer container fd6f1119b46e3d239d1c946c8fb9e7b25ff4c5dcb07ef7ac85cb4fe1f4cc374d. Dec 16 12:55:19.194150 systemd[1]: Started cri-containerd-5ccb0d0de3a6adade3a5aeb904778dcd065cbe8e1abf1f121795b83d3bbbf7c5.scope - libcontainer container 5ccb0d0de3a6adade3a5aeb904778dcd065cbe8e1abf1f121795b83d3bbbf7c5. Dec 16 12:55:19.197335 systemd[1]: Started cri-containerd-70e04c91aa263c1b4631ff8e242052ed98d37b34b1536004cd67cf1e92d74c0d.scope - libcontainer container 70e04c91aa263c1b4631ff8e242052ed98d37b34b1536004cd67cf1e92d74c0d. 
Dec 16 12:55:19.226203 kubelet[2417]: I1216 12:55:19.226150 2417 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:19.227274 kubelet[2417]: E1216 12:55:19.227222 2417 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://164.90.155.252:6443/api/v1/nodes\": dial tcp 164.90.155.252:6443: connect: connection refused" node="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:19.232000 audit: BPF prog-id=81 op=LOAD Dec 16 12:55:19.233000 audit: BPF prog-id=82 op=LOAD Dec 16 12:55:19.233000 audit[2505]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2475 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:19.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664366631313139623436653364323339643163393436633866623965 Dec 16 12:55:19.233000 audit: BPF prog-id=82 op=UNLOAD Dec 16 12:55:19.233000 audit[2505]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2475 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:19.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664366631313139623436653364323339643163393436633866623965 Dec 16 12:55:19.234000 audit: BPF prog-id=83 op=LOAD Dec 16 12:55:19.234000 audit[2505]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 
a1=c000130488 a2=98 a3=0 items=0 ppid=2475 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:19.234000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664366631313139623436653364323339643163393436633866623965 Dec 16 12:55:19.234000 audit: BPF prog-id=84 op=LOAD Dec 16 12:55:19.234000 audit[2505]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2475 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:19.234000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664366631313139623436653364323339643163393436633866623965 Dec 16 12:55:19.234000 audit: BPF prog-id=84 op=UNLOAD Dec 16 12:55:19.234000 audit[2505]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2475 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:19.234000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664366631313139623436653364323339643163393436633866623965 Dec 16 12:55:19.234000 audit: BPF prog-id=83 op=UNLOAD Dec 16 12:55:19.234000 audit[2505]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2475 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:19.234000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664366631313139623436653364323339643163393436633866623965 Dec 16 12:55:19.234000 audit: BPF prog-id=85 op=LOAD Dec 16 12:55:19.234000 audit[2505]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2475 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:19.234000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664366631313139623436653364323339643163393436633866623965 Dec 16 12:55:19.236000 audit: BPF prog-id=86 op=LOAD Dec 16 12:55:19.238000 audit: BPF prog-id=87 op=LOAD Dec 16 12:55:19.238000 audit[2511]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2479 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:19.238000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563636230643064653361366164616465336135616562393034373738 Dec 16 
12:55:19.238000 audit: BPF prog-id=87 op=UNLOAD Dec 16 12:55:19.238000 audit[2511]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2479 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:19.238000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563636230643064653361366164616465336135616562393034373738 Dec 16 12:55:19.238000 audit: BPF prog-id=88 op=LOAD Dec 16 12:55:19.238000 audit[2511]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2479 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:19.238000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563636230643064653361366164616465336135616562393034373738 Dec 16 12:55:19.239000 audit: BPF prog-id=89 op=LOAD Dec 16 12:55:19.239000 audit[2511]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2479 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:19.239000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563636230643064653361366164616465336135616562393034373738 Dec 16 12:55:19.239000 audit: BPF prog-id=89 op=UNLOAD Dec 16 12:55:19.239000 audit[2511]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2479 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:19.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563636230643064653361366164616465336135616562393034373738 Dec 16 12:55:19.239000 audit: BPF prog-id=88 op=UNLOAD Dec 16 12:55:19.239000 audit[2511]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2479 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:19.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563636230643064653361366164616465336135616562393034373738 Dec 16 12:55:19.239000 audit: BPF prog-id=90 op=LOAD Dec 16 12:55:19.239000 audit[2511]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2479 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:55:19.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563636230643064653361366164616465336135616562393034373738 Dec 16 12:55:19.250000 audit: BPF prog-id=91 op=LOAD Dec 16 12:55:19.252000 audit: BPF prog-id=92 op=LOAD Dec 16 12:55:19.252000 audit[2503]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2480 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:19.252000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730653034633931616132363363316234363331666638653234323035 Dec 16 12:55:19.252000 audit: BPF prog-id=92 op=UNLOAD Dec 16 12:55:19.252000 audit[2503]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2480 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:19.252000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730653034633931616132363363316234363331666638653234323035 Dec 16 12:55:19.253000 audit: BPF prog-id=93 op=LOAD Dec 16 12:55:19.253000 audit[2503]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2480 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:19.253000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730653034633931616132363363316234363331666638653234323035 Dec 16 12:55:19.254000 audit: BPF prog-id=94 op=LOAD Dec 16 12:55:19.254000 audit[2503]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2480 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:19.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730653034633931616132363363316234363331666638653234323035 Dec 16 12:55:19.254000 audit: BPF prog-id=94 op=UNLOAD Dec 16 12:55:19.254000 audit[2503]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2480 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:19.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730653034633931616132363363316234363331666638653234323035 Dec 16 12:55:19.255000 audit: BPF prog-id=93 op=UNLOAD Dec 16 12:55:19.255000 audit[2503]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2480 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:19.255000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730653034633931616132363363316234363331666638653234323035 Dec 16 12:55:19.255000 audit: BPF prog-id=95 op=LOAD Dec 16 12:55:19.255000 audit[2503]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2480 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:19.255000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730653034633931616132363363316234363331666638653234323035 Dec 16 12:55:19.297773 kubelet[2417]: E1216 12:55:19.297685 2417 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://164.90.155.252:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515.1.0-3-ef2be4b8ba&limit=500&resourceVersion=0\": dial tcp 164.90.155.252:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 12:55:19.320188 containerd[1617]: time="2025-12-16T12:55:19.320036977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515.1.0-3-ef2be4b8ba,Uid:f6e1aba290bf0d5830d9547aef9b44bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd6f1119b46e3d239d1c946c8fb9e7b25ff4c5dcb07ef7ac85cb4fe1f4cc374d\"" Dec 16 12:55:19.322032 kubelet[2417]: E1216 12:55:19.321982 2417 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:19.333668 containerd[1617]: time="2025-12-16T12:55:19.331772357Z" level=info msg="CreateContainer within sandbox \"fd6f1119b46e3d239d1c946c8fb9e7b25ff4c5dcb07ef7ac85cb4fe1f4cc374d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 12:55:19.340523 containerd[1617]: time="2025-12-16T12:55:19.340472151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4515.1.0-3-ef2be4b8ba,Uid:5f35eb11cacaf03543b99c51360ac568,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ccb0d0de3a6adade3a5aeb904778dcd065cbe8e1abf1f121795b83d3bbbf7c5\"" Dec 16 12:55:19.342203 kubelet[2417]: E1216 12:55:19.341933 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:19.349443 containerd[1617]: time="2025-12-16T12:55:19.349377604Z" level=info msg="Container b86b471fae827673016c1784cdd97d5730f204df4b653e583a8c0cf67d82e1fd: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:55:19.371680 containerd[1617]: time="2025-12-16T12:55:19.370121432Z" level=info msg="CreateContainer within sandbox \"5ccb0d0de3a6adade3a5aeb904778dcd065cbe8e1abf1f121795b83d3bbbf7c5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 12:55:19.372086 containerd[1617]: time="2025-12-16T12:55:19.372035031Z" level=info msg="CreateContainer within sandbox \"fd6f1119b46e3d239d1c946c8fb9e7b25ff4c5dcb07ef7ac85cb4fe1f4cc374d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b86b471fae827673016c1784cdd97d5730f204df4b653e583a8c0cf67d82e1fd\"" Dec 16 12:55:19.373870 containerd[1617]: time="2025-12-16T12:55:19.373816095Z" level=info msg="StartContainer for 
\"b86b471fae827673016c1784cdd97d5730f204df4b653e583a8c0cf67d82e1fd\"" Dec 16 12:55:19.377402 containerd[1617]: time="2025-12-16T12:55:19.377343686Z" level=info msg="connecting to shim b86b471fae827673016c1784cdd97d5730f204df4b653e583a8c0cf67d82e1fd" address="unix:///run/containerd/s/c11bf76a82c6c1b932392ff63168dbee20f07f7d4d415c0c1c7f2d9de87e336d" protocol=ttrpc version=3 Dec 16 12:55:19.383515 containerd[1617]: time="2025-12-16T12:55:19.381811012Z" level=info msg="Container 5cb6c64b30fe59a3868053bb5e87eeb993bb271e80eb3fb99a960c6c457bcbcc: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:55:19.388506 containerd[1617]: time="2025-12-16T12:55:19.388436385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515.1.0-3-ef2be4b8ba,Uid:87816fa8f93521a5b540cfe0bb82be90,Namespace:kube-system,Attempt:0,} returns sandbox id \"70e04c91aa263c1b4631ff8e242052ed98d37b34b1536004cd67cf1e92d74c0d\"" Dec 16 12:55:19.389652 kubelet[2417]: E1216 12:55:19.389492 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:19.395399 containerd[1617]: time="2025-12-16T12:55:19.394546959Z" level=info msg="CreateContainer within sandbox \"70e04c91aa263c1b4631ff8e242052ed98d37b34b1536004cd67cf1e92d74c0d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 12:55:19.395676 containerd[1617]: time="2025-12-16T12:55:19.395606141Z" level=info msg="CreateContainer within sandbox \"5ccb0d0de3a6adade3a5aeb904778dcd065cbe8e1abf1f121795b83d3bbbf7c5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5cb6c64b30fe59a3868053bb5e87eeb993bb271e80eb3fb99a960c6c457bcbcc\"" Dec 16 12:55:19.396903 containerd[1617]: time="2025-12-16T12:55:19.396747771Z" level=info msg="StartContainer for \"5cb6c64b30fe59a3868053bb5e87eeb993bb271e80eb3fb99a960c6c457bcbcc\"" Dec 16 12:55:19.398983 
containerd[1617]: time="2025-12-16T12:55:19.398920954Z" level=info msg="connecting to shim 5cb6c64b30fe59a3868053bb5e87eeb993bb271e80eb3fb99a960c6c457bcbcc" address="unix:///run/containerd/s/23a5244a87db6ca60b175cdd716f037231625f94c0e54b4df106d1a481f7b2fe" protocol=ttrpc version=3 Dec 16 12:55:19.406546 containerd[1617]: time="2025-12-16T12:55:19.406481478Z" level=info msg="Container 6574b57468105dfc87eb0712cac48191b08b9c6e2572af38f324a22129d72784: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:55:19.419478 containerd[1617]: time="2025-12-16T12:55:19.419395099Z" level=info msg="CreateContainer within sandbox \"70e04c91aa263c1b4631ff8e242052ed98d37b34b1536004cd67cf1e92d74c0d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6574b57468105dfc87eb0712cac48191b08b9c6e2572af38f324a22129d72784\"" Dec 16 12:55:19.423275 containerd[1617]: time="2025-12-16T12:55:19.422029171Z" level=info msg="StartContainer for \"6574b57468105dfc87eb0712cac48191b08b9c6e2572af38f324a22129d72784\"" Dec 16 12:55:19.425176 containerd[1617]: time="2025-12-16T12:55:19.425136420Z" level=info msg="connecting to shim 6574b57468105dfc87eb0712cac48191b08b9c6e2572af38f324a22129d72784" address="unix:///run/containerd/s/d9520ef8636b390032639961d7bac094b9206d29055401cd3759088564c764ec" protocol=ttrpc version=3 Dec 16 12:55:19.427182 systemd[1]: Started cri-containerd-b86b471fae827673016c1784cdd97d5730f204df4b653e583a8c0cf67d82e1fd.scope - libcontainer container b86b471fae827673016c1784cdd97d5730f204df4b653e583a8c0cf67d82e1fd. Dec 16 12:55:19.442939 systemd[1]: Started cri-containerd-5cb6c64b30fe59a3868053bb5e87eeb993bb271e80eb3fb99a960c6c457bcbcc.scope - libcontainer container 5cb6c64b30fe59a3868053bb5e87eeb993bb271e80eb3fb99a960c6c457bcbcc. 
Dec 16 12:55:19.462815 kubelet[2417]: E1216 12:55:19.462397 2417 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://164.90.155.252:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 164.90.155.252:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 12:55:19.472211 systemd[1]: Started cri-containerd-6574b57468105dfc87eb0712cac48191b08b9c6e2572af38f324a22129d72784.scope - libcontainer container 6574b57468105dfc87eb0712cac48191b08b9c6e2572af38f324a22129d72784. Dec 16 12:55:19.487000 audit: BPF prog-id=96 op=LOAD Dec 16 12:55:19.489000 audit: BPF prog-id=97 op=LOAD Dec 16 12:55:19.489000 audit[2589]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2475 pid=2589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:19.489000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238366234373166616538323736373330313663313738346364643937 Dec 16 12:55:19.489000 audit: BPF prog-id=97 op=UNLOAD Dec 16 12:55:19.489000 audit[2589]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2475 pid=2589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:19.489000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238366234373166616538323736373330313663313738346364643937 Dec 16 12:55:19.492000 audit: BPF prog-id=98 op=LOAD Dec 16 12:55:19.492000 audit[2589]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2475 pid=2589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:19.492000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238366234373166616538323736373330313663313738346364643937 Dec 16 12:55:19.492000 audit: BPF prog-id=99 op=LOAD Dec 16 12:55:19.492000 audit[2589]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2475 pid=2589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:19.492000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238366234373166616538323736373330313663313738346364643937 Dec 16 12:55:19.493000 audit: BPF prog-id=99 op=UNLOAD Dec 16 12:55:19.493000 audit[2589]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2475 pid=2589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Dec 16 12:55:19.493000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238366234373166616538323736373330313663313738346364643937 Dec 16 12:55:19.493000 audit: BPF prog-id=98 op=UNLOAD Dec 16 12:55:19.493000 audit[2589]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2475 pid=2589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:19.493000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238366234373166616538323736373330313663313738346364643937 Dec 16 12:55:19.494000 audit: BPF prog-id=100 op=LOAD Dec 16 12:55:19.493000 audit: BPF prog-id=101 op=LOAD Dec 16 12:55:19.493000 audit[2589]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2475 pid=2589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:19.493000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238366234373166616538323736373330313663313738346364643937 Dec 16 12:55:19.499000 audit: BPF prog-id=102 op=LOAD Dec 16 12:55:19.499000 audit[2598]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2479 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:19.499000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563623663363462333066653539613338363830353362623565383765 Dec 16 12:55:19.500000 audit: BPF prog-id=102 op=UNLOAD Dec 16 12:55:19.500000 audit[2598]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2479 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:19.500000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563623663363462333066653539613338363830353362623565383765 Dec 16 12:55:19.501000 audit: BPF prog-id=103 op=LOAD Dec 16 12:55:19.501000 audit[2598]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2479 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:19.501000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563623663363462333066653539613338363830353362623565383765 Dec 16 12:55:19.504000 audit: BPF prog-id=104 op=LOAD Dec 16 12:55:19.504000 audit: BPF prog-id=105 op=LOAD Dec 16 12:55:19.504000 audit[2598]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 
items=0 ppid=2479 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:19.504000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563623663363462333066653539613338363830353362623565383765 Dec 16 12:55:19.505000 audit: BPF prog-id=105 op=UNLOAD Dec 16 12:55:19.505000 audit[2598]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2479 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:19.505000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563623663363462333066653539613338363830353362623565383765 Dec 16 12:55:19.505000 audit: BPF prog-id=106 op=LOAD Dec 16 12:55:19.505000 audit[2615]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2480 pid=2615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:19.505000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635373462353734363831303564666338376562303731326361633438 Dec 16 12:55:19.505000 audit: BPF prog-id=103 op=UNLOAD Dec 16 12:55:19.505000 audit: BPF prog-id=106 op=UNLOAD Dec 16 
12:55:19.505000 audit[2615]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2480 pid=2615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:19.505000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635373462353734363831303564666338376562303731326361633438 Dec 16 12:55:19.505000 audit[2598]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2479 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:19.505000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563623663363462333066653539613338363830353362623565383765 Dec 16 12:55:19.505000 audit: BPF prog-id=107 op=LOAD Dec 16 12:55:19.505000 audit[2615]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2480 pid=2615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:19.505000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635373462353734363831303564666338376562303731326361633438 Dec 16 12:55:19.506000 audit: BPF prog-id=108 op=LOAD Dec 16 12:55:19.506000 
audit[2615]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=2480 pid=2615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:19.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635373462353734363831303564666338376562303731326361633438 Dec 16 12:55:19.506000 audit: BPF prog-id=108 op=UNLOAD Dec 16 12:55:19.506000 audit[2615]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2480 pid=2615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:19.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635373462353734363831303564666338376562303731326361633438 Dec 16 12:55:19.506000 audit: BPF prog-id=107 op=UNLOAD Dec 16 12:55:19.506000 audit[2615]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2480 pid=2615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:19.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635373462353734363831303564666338376562303731326361633438 Dec 16 12:55:19.506000 audit: BPF 
prog-id=109 op=LOAD Dec 16 12:55:19.506000 audit[2615]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=2480 pid=2615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:19.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635373462353734363831303564666338376562303731326361633438 Dec 16 12:55:19.505000 audit: BPF prog-id=110 op=LOAD Dec 16 12:55:19.505000 audit[2598]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2479 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:19.505000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563623663363462333066653539613338363830353362623565383765 Dec 16 12:55:19.579052 containerd[1617]: time="2025-12-16T12:55:19.578853697Z" level=info msg="StartContainer for \"b86b471fae827673016c1784cdd97d5730f204df4b653e583a8c0cf67d82e1fd\" returns successfully" Dec 16 12:55:19.593173 containerd[1617]: time="2025-12-16T12:55:19.593106778Z" level=info msg="StartContainer for \"5cb6c64b30fe59a3868053bb5e87eeb993bb271e80eb3fb99a960c6c457bcbcc\" returns successfully" Dec 16 12:55:19.612556 containerd[1617]: time="2025-12-16T12:55:19.612519072Z" level=info msg="StartContainer for \"6574b57468105dfc87eb0712cac48191b08b9c6e2572af38f324a22129d72784\" returns successfully" Dec 16 12:55:19.734803 kubelet[2417]: E1216 
12:55:19.734737 2417 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://164.90.155.252:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 164.90.155.252:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 12:55:19.795870 kubelet[2417]: E1216 12:55:19.795807 2417 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://164.90.155.252:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 164.90.155.252:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 12:55:19.837836 kubelet[2417]: E1216 12:55:19.837711 2417 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.90.155.252:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-3-ef2be4b8ba?timeout=10s\": dial tcp 164.90.155.252:6443: connect: connection refused" interval="1.6s" Dec 16 12:55:20.029828 kubelet[2417]: I1216 12:55:20.029132 2417 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:20.519667 kubelet[2417]: E1216 12:55:20.518236 2417 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-3-ef2be4b8ba\" not found" node="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:20.520960 kubelet[2417]: E1216 12:55:20.520925 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:20.522661 kubelet[2417]: E1216 12:55:20.522592 2417 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-3-ef2be4b8ba\" not found" node="ci-4515.1.0-3-ef2be4b8ba" Dec 16 
12:55:20.522930 kubelet[2417]: E1216 12:55:20.522914 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:20.526367 kubelet[2417]: E1216 12:55:20.526321 2417 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-3-ef2be4b8ba\" not found" node="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:20.526720 kubelet[2417]: E1216 12:55:20.526683 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:21.532953 kubelet[2417]: E1216 12:55:21.532531 2417 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-3-ef2be4b8ba\" not found" node="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:21.532953 kubelet[2417]: E1216 12:55:21.532828 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:21.533599 kubelet[2417]: E1216 12:55:21.533581 2417 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-3-ef2be4b8ba\" not found" node="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:21.533969 kubelet[2417]: E1216 12:55:21.533955 2417 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-3-ef2be4b8ba\" not found" node="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:21.534170 kubelet[2417]: E1216 12:55:21.534158 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:21.534404 
kubelet[2417]: E1216 12:55:21.534262 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:22.532682 kubelet[2417]: E1216 12:55:22.532614 2417 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-3-ef2be4b8ba\" not found" node="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:22.533546 kubelet[2417]: E1216 12:55:22.533507 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:22.536334 kubelet[2417]: E1216 12:55:22.536283 2417 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-3-ef2be4b8ba\" not found" node="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:22.536659 kubelet[2417]: E1216 12:55:22.536599 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:22.867007 kubelet[2417]: I1216 12:55:22.866543 2417 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:22.867007 kubelet[2417]: E1216 12:55:22.866593 2417 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515.1.0-3-ef2be4b8ba\": node \"ci-4515.1.0-3-ef2be4b8ba\" not found" Dec 16 12:55:22.918752 kubelet[2417]: I1216 12:55:22.918700 2417 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:22.933194 kubelet[2417]: E1216 12:55:22.933136 2417 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515.1.0-3-ef2be4b8ba\" is forbidden: no PriorityClass with name 
system-node-critical was found" pod="kube-system/kube-scheduler-ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:22.933530 kubelet[2417]: I1216 12:55:22.933396 2417 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:22.941104 kubelet[2417]: E1216 12:55:22.941047 2417 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515.1.0-3-ef2be4b8ba\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:22.941104 kubelet[2417]: I1216 12:55:22.941094 2417 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:22.945288 kubelet[2417]: E1216 12:55:22.945234 2417 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4515.1.0-3-ef2be4b8ba\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:23.387971 kubelet[2417]: I1216 12:55:23.387340 2417 apiserver.go:52] "Watching apiserver" Dec 16 12:55:23.418051 kubelet[2417]: I1216 12:55:23.417984 2417 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:55:24.123461 kubelet[2417]: I1216 12:55:24.123213 2417 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:24.136708 kubelet[2417]: I1216 12:55:24.136506 2417 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:55:24.138055 kubelet[2417]: E1216 12:55:24.137922 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" 
Dec 16 12:55:24.419923 kubelet[2417]: I1216 12:55:24.419674 2417 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:24.431661 kubelet[2417]: I1216 12:55:24.430116 2417 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:55:24.431661 kubelet[2417]: E1216 12:55:24.430533 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:24.534542 kubelet[2417]: E1216 12:55:24.534499 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:24.535160 kubelet[2417]: E1216 12:55:24.535128 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:25.366990 systemd[1]: Reload requested from client PID 2692 ('systemctl') (unit session-7.scope)... Dec 16 12:55:25.367360 systemd[1]: Reloading... Dec 16 12:55:25.560668 zram_generator::config[2741]: No configuration found. Dec 16 12:55:25.881140 systemd[1]: Reloading finished in 512 ms. Dec 16 12:55:25.915678 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:55:25.929200 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 12:55:25.929814 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 12:55:25.932729 kernel: kauditd_printk_skb: 202 callbacks suppressed Dec 16 12:55:25.932907 kernel: audit: type=1131 audit(1765889725.929:391): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:55:25.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:55:25.934087 systemd[1]: kubelet.service: Consumed 1.519s CPU time, 127.6M memory peak. Dec 16 12:55:25.939192 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:55:25.943756 kernel: audit: type=1334 audit(1765889725.939:392): prog-id=111 op=LOAD Dec 16 12:55:25.943904 kernel: audit: type=1334 audit(1765889725.939:393): prog-id=67 op=UNLOAD Dec 16 12:55:25.939000 audit: BPF prog-id=111 op=LOAD Dec 16 12:55:25.939000 audit: BPF prog-id=67 op=UNLOAD Dec 16 12:55:25.939000 audit: BPF prog-id=112 op=LOAD Dec 16 12:55:25.945711 kernel: audit: type=1334 audit(1765889725.939:394): prog-id=112 op=LOAD Dec 16 12:55:25.939000 audit: BPF prog-id=113 op=LOAD Dec 16 12:55:25.949573 kernel: audit: type=1334 audit(1765889725.939:395): prog-id=113 op=LOAD Dec 16 12:55:25.949681 kernel: audit: type=1334 audit(1765889725.939:396): prog-id=68 op=UNLOAD Dec 16 12:55:25.939000 audit: BPF prog-id=68 op=UNLOAD Dec 16 12:55:25.939000 audit: BPF prog-id=69 op=UNLOAD Dec 16 12:55:25.951767 kernel: audit: type=1334 audit(1765889725.939:397): prog-id=69 op=UNLOAD Dec 16 12:55:25.953665 kernel: audit: type=1334 audit(1765889725.940:398): prog-id=114 op=LOAD Dec 16 12:55:25.940000 audit: BPF prog-id=114 op=LOAD Dec 16 12:55:25.940000 audit: BPF prog-id=77 op=UNLOAD Dec 16 12:55:25.955718 kernel: audit: type=1334 audit(1765889725.940:399): prog-id=77 op=UNLOAD Dec 16 12:55:25.941000 audit: BPF 
prog-id=115 op=LOAD Dec 16 12:55:25.941000 audit: BPF prog-id=78 op=UNLOAD Dec 16 12:55:25.941000 audit: BPF prog-id=116 op=LOAD Dec 16 12:55:25.941000 audit: BPF prog-id=117 op=LOAD Dec 16 12:55:25.941000 audit: BPF prog-id=79 op=UNLOAD Dec 16 12:55:25.941000 audit: BPF prog-id=80 op=UNLOAD Dec 16 12:55:25.943000 audit: BPF prog-id=118 op=LOAD Dec 16 12:55:25.943000 audit: BPF prog-id=66 op=UNLOAD Dec 16 12:55:25.944000 audit: BPF prog-id=119 op=LOAD Dec 16 12:55:25.944000 audit: BPF prog-id=63 op=UNLOAD Dec 16 12:55:25.944000 audit: BPF prog-id=120 op=LOAD Dec 16 12:55:25.957674 kernel: audit: type=1334 audit(1765889725.941:400): prog-id=115 op=LOAD Dec 16 12:55:25.944000 audit: BPF prog-id=121 op=LOAD Dec 16 12:55:25.944000 audit: BPF prog-id=64 op=UNLOAD Dec 16 12:55:25.944000 audit: BPF prog-id=65 op=UNLOAD Dec 16 12:55:25.946000 audit: BPF prog-id=122 op=LOAD Dec 16 12:55:25.946000 audit: BPF prog-id=71 op=UNLOAD Dec 16 12:55:25.946000 audit: BPF prog-id=123 op=LOAD Dec 16 12:55:25.946000 audit: BPF prog-id=124 op=LOAD Dec 16 12:55:25.946000 audit: BPF prog-id=72 op=UNLOAD Dec 16 12:55:25.946000 audit: BPF prog-id=73 op=UNLOAD Dec 16 12:55:25.948000 audit: BPF prog-id=125 op=LOAD Dec 16 12:55:25.948000 audit: BPF prog-id=74 op=UNLOAD Dec 16 12:55:25.948000 audit: BPF prog-id=126 op=LOAD Dec 16 12:55:25.948000 audit: BPF prog-id=127 op=LOAD Dec 16 12:55:25.948000 audit: BPF prog-id=75 op=UNLOAD Dec 16 12:55:25.948000 audit: BPF prog-id=76 op=UNLOAD Dec 16 12:55:25.948000 audit: BPF prog-id=128 op=LOAD Dec 16 12:55:25.948000 audit: BPF prog-id=129 op=LOAD Dec 16 12:55:25.948000 audit: BPF prog-id=61 op=UNLOAD Dec 16 12:55:25.948000 audit: BPF prog-id=62 op=UNLOAD Dec 16 12:55:25.949000 audit: BPF prog-id=130 op=LOAD Dec 16 12:55:25.949000 audit: BPF prog-id=70 op=UNLOAD Dec 16 12:55:26.149026 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 12:55:26.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:55:26.164297 (kubelet)[2789]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:55:26.251098 kubelet[2789]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:55:26.251098 kubelet[2789]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:55:26.251098 kubelet[2789]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 16 12:55:26.251617 kubelet[2789]: I1216 12:55:26.251192 2789 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:55:26.263692 kubelet[2789]: I1216 12:55:26.263434 2789 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 12:55:26.263692 kubelet[2789]: I1216 12:55:26.263486 2789 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:55:26.264661 kubelet[2789]: I1216 12:55:26.264324 2789 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 12:55:26.268459 kubelet[2789]: I1216 12:55:26.268405 2789 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 12:55:26.281966 kubelet[2789]: I1216 12:55:26.281923 2789 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:55:26.290045 kubelet[2789]: I1216 12:55:26.289984 2789 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:55:26.296435 kubelet[2789]: I1216 12:55:26.296392 2789 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 12:55:26.296755 kubelet[2789]: I1216 12:55:26.296715 2789 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:55:26.297019 kubelet[2789]: I1216 12:55:26.296763 2789 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515.1.0-3-ef2be4b8ba","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:55:26.297168 kubelet[2789]: I1216 12:55:26.297031 2789 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 
12:55:26.297168 kubelet[2789]: I1216 12:55:26.297045 2789 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 12:55:26.297168 kubelet[2789]: I1216 12:55:26.297104 2789 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:55:26.297775 kubelet[2789]: I1216 12:55:26.297316 2789 kubelet.go:480] "Attempting to sync node with API server" Dec 16 12:55:26.297775 kubelet[2789]: I1216 12:55:26.297342 2789 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:55:26.297775 kubelet[2789]: I1216 12:55:26.297379 2789 kubelet.go:386] "Adding apiserver pod source" Dec 16 12:55:26.297775 kubelet[2789]: I1216 12:55:26.297397 2789 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:55:26.300860 kubelet[2789]: I1216 12:55:26.300823 2789 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 12:55:26.301950 kubelet[2789]: I1216 12:55:26.301608 2789 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 12:55:26.309785 kubelet[2789]: I1216 12:55:26.309692 2789 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 12:55:26.310032 kubelet[2789]: I1216 12:55:26.310011 2789 server.go:1289] "Started kubelet" Dec 16 12:55:26.311658 kubelet[2789]: I1216 12:55:26.311203 2789 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:55:26.314117 kubelet[2789]: I1216 12:55:26.312737 2789 server.go:317] "Adding debug handlers to kubelet server" Dec 16 12:55:26.314117 kubelet[2789]: I1216 12:55:26.312608 2789 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:55:26.316122 kubelet[2789]: I1216 12:55:26.315283 2789 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 
12:55:26.330963 kubelet[2789]: I1216 12:55:26.329692 2789 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:55:26.340675 kubelet[2789]: I1216 12:55:26.338870 2789 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:55:26.344182 kubelet[2789]: I1216 12:55:26.344127 2789 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 12:55:26.345535 kubelet[2789]: E1216 12:55:26.345478 2789 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-3-ef2be4b8ba\" not found" Dec 16 12:55:26.347845 kubelet[2789]: I1216 12:55:26.347797 2789 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 12:55:26.348481 kubelet[2789]: I1216 12:55:26.348253 2789 reconciler.go:26] "Reconciler: start to sync state" Dec 16 12:55:26.360024 kubelet[2789]: I1216 12:55:26.359034 2789 factory.go:223] Registration of the systemd container factory successfully Dec 16 12:55:26.360024 kubelet[2789]: I1216 12:55:26.359170 2789 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:55:26.373032 kubelet[2789]: I1216 12:55:26.371955 2789 factory.go:223] Registration of the containerd container factory successfully Dec 16 12:55:26.408302 kubelet[2789]: I1216 12:55:26.407954 2789 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 12:55:26.415728 kubelet[2789]: I1216 12:55:26.415689 2789 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Dec 16 12:55:26.415959 kubelet[2789]: I1216 12:55:26.415946 2789 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 12:55:26.416090 kubelet[2789]: I1216 12:55:26.416077 2789 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 12:55:26.416162 kubelet[2789]: I1216 12:55:26.416154 2789 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 12:55:26.416299 kubelet[2789]: E1216 12:55:26.416262 2789 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:55:26.475937 kubelet[2789]: I1216 12:55:26.474590 2789 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:55:26.475937 kubelet[2789]: I1216 12:55:26.474620 2789 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:55:26.475937 kubelet[2789]: I1216 12:55:26.474729 2789 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:55:26.475937 kubelet[2789]: I1216 12:55:26.475018 2789 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 12:55:26.475937 kubelet[2789]: I1216 12:55:26.475041 2789 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 12:55:26.475937 kubelet[2789]: I1216 12:55:26.475090 2789 policy_none.go:49] "None policy: Start" Dec 16 12:55:26.475937 kubelet[2789]: I1216 12:55:26.475109 2789 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 12:55:26.475937 kubelet[2789]: I1216 12:55:26.475129 2789 state_mem.go:35] "Initializing new in-memory state store" Dec 16 12:55:26.475937 kubelet[2789]: I1216 12:55:26.475305 2789 state_mem.go:75] "Updated machine memory state" Dec 16 12:55:26.487186 kubelet[2789]: E1216 12:55:26.487148 2789 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 12:55:26.488621 kubelet[2789]: I1216 
12:55:26.488094 2789 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:55:26.488621 kubelet[2789]: I1216 12:55:26.488116 2789 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:55:26.490032 kubelet[2789]: I1216 12:55:26.489907 2789 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:55:26.498278 kubelet[2789]: E1216 12:55:26.498238 2789 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 12:55:26.520048 kubelet[2789]: I1216 12:55:26.519979 2789 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:26.526132 kubelet[2789]: I1216 12:55:26.525936 2789 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:26.526692 kubelet[2789]: I1216 12:55:26.526618 2789 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:26.551496 kubelet[2789]: I1216 12:55:26.550870 2789 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:55:26.551496 kubelet[2789]: E1216 12:55:26.551044 2789 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515.1.0-3-ef2be4b8ba\" already exists" pod="kube-system/kube-apiserver-ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:26.551496 kubelet[2789]: I1216 12:55:26.551172 2789 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:55:26.551795 kubelet[2789]: I1216 12:55:26.551535 2789 warnings.go:110] "Warning: metadata.name: this is used in the Pod's 
hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:55:26.551795 kubelet[2789]: E1216 12:55:26.551578 2789 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4515.1.0-3-ef2be4b8ba\" already exists" pod="kube-system/kube-controller-manager-ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:26.608056 kubelet[2789]: I1216 12:55:26.608001 2789 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:26.619802 kubelet[2789]: I1216 12:55:26.619463 2789 kubelet_node_status.go:124] "Node was previously registered" node="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:26.619802 kubelet[2789]: I1216 12:55:26.619607 2789 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:26.651610 kubelet[2789]: I1216 12:55:26.651541 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f6e1aba290bf0d5830d9547aef9b44bc-kubeconfig\") pod \"kube-controller-manager-ci-4515.1.0-3-ef2be4b8ba\" (UID: \"f6e1aba290bf0d5830d9547aef9b44bc\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:26.651610 kubelet[2789]: I1216 12:55:26.651595 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f6e1aba290bf0d5830d9547aef9b44bc-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515.1.0-3-ef2be4b8ba\" (UID: \"f6e1aba290bf0d5830d9547aef9b44bc\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:26.651901 kubelet[2789]: I1216 12:55:26.651622 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/5f35eb11cacaf03543b99c51360ac568-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515.1.0-3-ef2be4b8ba\" (UID: \"5f35eb11cacaf03543b99c51360ac568\") " pod="kube-system/kube-apiserver-ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:26.651901 kubelet[2789]: I1216 12:55:26.651657 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/87816fa8f93521a5b540cfe0bb82be90-kubeconfig\") pod \"kube-scheduler-ci-4515.1.0-3-ef2be4b8ba\" (UID: \"87816fa8f93521a5b540cfe0bb82be90\") " pod="kube-system/kube-scheduler-ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:26.651901 kubelet[2789]: I1216 12:55:26.651680 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5f35eb11cacaf03543b99c51360ac568-ca-certs\") pod \"kube-apiserver-ci-4515.1.0-3-ef2be4b8ba\" (UID: \"5f35eb11cacaf03543b99c51360ac568\") " pod="kube-system/kube-apiserver-ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:26.651901 kubelet[2789]: I1216 12:55:26.651697 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5f35eb11cacaf03543b99c51360ac568-k8s-certs\") pod \"kube-apiserver-ci-4515.1.0-3-ef2be4b8ba\" (UID: \"5f35eb11cacaf03543b99c51360ac568\") " pod="kube-system/kube-apiserver-ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:26.651901 kubelet[2789]: I1216 12:55:26.651720 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f6e1aba290bf0d5830d9547aef9b44bc-ca-certs\") pod \"kube-controller-manager-ci-4515.1.0-3-ef2be4b8ba\" (UID: \"f6e1aba290bf0d5830d9547aef9b44bc\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:26.652036 kubelet[2789]: I1216 12:55:26.651735 2789 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f6e1aba290bf0d5830d9547aef9b44bc-flexvolume-dir\") pod \"kube-controller-manager-ci-4515.1.0-3-ef2be4b8ba\" (UID: \"f6e1aba290bf0d5830d9547aef9b44bc\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:26.652036 kubelet[2789]: I1216 12:55:26.651751 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f6e1aba290bf0d5830d9547aef9b44bc-k8s-certs\") pod \"kube-controller-manager-ci-4515.1.0-3-ef2be4b8ba\" (UID: \"f6e1aba290bf0d5830d9547aef9b44bc\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:26.852671 kubelet[2789]: E1216 12:55:26.852364 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:26.853658 kubelet[2789]: E1216 12:55:26.853598 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:26.853786 kubelet[2789]: E1216 12:55:26.853060 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:27.303787 kubelet[2789]: I1216 12:55:27.303665 2789 apiserver.go:52] "Watching apiserver" Dec 16 12:55:27.348285 kubelet[2789]: I1216 12:55:27.348198 2789 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:55:27.457861 kubelet[2789]: I1216 12:55:27.457624 2789 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:27.458114 
kubelet[2789]: E1216 12:55:27.458095 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:27.459729 kubelet[2789]: E1216 12:55:27.459690 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:27.465611 kubelet[2789]: I1216 12:55:27.465571 2789 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:55:27.465996 kubelet[2789]: E1216 12:55:27.465953 2789 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515.1.0-3-ef2be4b8ba\" already exists" pod="kube-system/kube-apiserver-ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:55:27.466389 kubelet[2789]: E1216 12:55:27.466355 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:27.550225 kubelet[2789]: I1216 12:55:27.549992 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4515.1.0-3-ef2be4b8ba" podStartSLOduration=3.549967994 podStartE2EDuration="3.549967994s" podCreationTimestamp="2025-12-16 12:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:55:27.532904759 +0000 UTC m=+1.358374185" watchObservedRunningTime="2025-12-16 12:55:27.549967994 +0000 UTC m=+1.375437388" Dec 16 12:55:27.582388 kubelet[2789]: I1216 12:55:27.581977 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4515.1.0-3-ef2be4b8ba" 
podStartSLOduration=3.581954257 podStartE2EDuration="3.581954257s" podCreationTimestamp="2025-12-16 12:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:55:27.551127927 +0000 UTC m=+1.376597367" watchObservedRunningTime="2025-12-16 12:55:27.581954257 +0000 UTC m=+1.407423662" Dec 16 12:55:27.582388 kubelet[2789]: I1216 12:55:27.582130 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4515.1.0-3-ef2be4b8ba" podStartSLOduration=1.582121962 podStartE2EDuration="1.582121962s" podCreationTimestamp="2025-12-16 12:55:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:55:27.58171291 +0000 UTC m=+1.407182324" watchObservedRunningTime="2025-12-16 12:55:27.582121962 +0000 UTC m=+1.407591378" Dec 16 12:55:28.460448 kubelet[2789]: E1216 12:55:28.460410 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:28.461249 kubelet[2789]: E1216 12:55:28.461222 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:29.204286 systemd-resolved[1294]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. Dec 16 12:55:29.267100 systemd-timesyncd[1481]: Contacted time server 216.144.228.179:123 (2.flatcar.pool.ntp.org). Dec 16 12:55:29.267188 systemd-timesyncd[1481]: Initial clock synchronization to Tue 2025-12-16 12:55:29.208203 UTC. 
Dec 16 12:55:29.738213 kubelet[2789]: I1216 12:55:29.738180 2789 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 12:55:29.740824 containerd[1617]: time="2025-12-16T12:55:29.740297092Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 12:55:29.742055 kubelet[2789]: I1216 12:55:29.741628 2789 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 12:55:30.055970 kubelet[2789]: E1216 12:55:30.055656 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:30.463210 kubelet[2789]: E1216 12:55:30.463174 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:30.818175 systemd[1]: Created slice kubepods-besteffort-pod107826fa_5f79_4e3f_adda_b03eba65230e.slice - libcontainer container kubepods-besteffort-pod107826fa_5f79_4e3f_adda_b03eba65230e.slice. 
Dec 16 12:55:30.878652 kubelet[2789]: I1216 12:55:30.878548 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/107826fa-5f79-4e3f-adda-b03eba65230e-lib-modules\") pod \"kube-proxy-z9vdx\" (UID: \"107826fa-5f79-4e3f-adda-b03eba65230e\") " pod="kube-system/kube-proxy-z9vdx" Dec 16 12:55:30.879790 kubelet[2789]: I1216 12:55:30.878670 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7l9b\" (UniqueName: \"kubernetes.io/projected/107826fa-5f79-4e3f-adda-b03eba65230e-kube-api-access-w7l9b\") pod \"kube-proxy-z9vdx\" (UID: \"107826fa-5f79-4e3f-adda-b03eba65230e\") " pod="kube-system/kube-proxy-z9vdx" Dec 16 12:55:30.879790 kubelet[2789]: I1216 12:55:30.878696 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/107826fa-5f79-4e3f-adda-b03eba65230e-xtables-lock\") pod \"kube-proxy-z9vdx\" (UID: \"107826fa-5f79-4e3f-adda-b03eba65230e\") " pod="kube-system/kube-proxy-z9vdx" Dec 16 12:55:30.879790 kubelet[2789]: I1216 12:55:30.878747 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/107826fa-5f79-4e3f-adda-b03eba65230e-kube-proxy\") pod \"kube-proxy-z9vdx\" (UID: \"107826fa-5f79-4e3f-adda-b03eba65230e\") " pod="kube-system/kube-proxy-z9vdx" Dec 16 12:55:30.944379 systemd[1]: Created slice kubepods-besteffort-pod1ac7c11d_8bc0_4682_91a6_75cdd28eee69.slice - libcontainer container kubepods-besteffort-pod1ac7c11d_8bc0_4682_91a6_75cdd28eee69.slice. 
Dec 16 12:55:30.979499 kubelet[2789]: I1216 12:55:30.979438 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1ac7c11d-8bc0-4682-91a6-75cdd28eee69-var-lib-calico\") pod \"tigera-operator-7dcd859c48-w82cz\" (UID: \"1ac7c11d-8bc0-4682-91a6-75cdd28eee69\") " pod="tigera-operator/tigera-operator-7dcd859c48-w82cz" Dec 16 12:55:30.979743 kubelet[2789]: I1216 12:55:30.979577 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv5rv\" (UniqueName: \"kubernetes.io/projected/1ac7c11d-8bc0-4682-91a6-75cdd28eee69-kube-api-access-wv5rv\") pod \"tigera-operator-7dcd859c48-w82cz\" (UID: \"1ac7c11d-8bc0-4682-91a6-75cdd28eee69\") " pod="tigera-operator/tigera-operator-7dcd859c48-w82cz" Dec 16 12:55:31.132624 kubelet[2789]: E1216 12:55:31.132134 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:31.134245 containerd[1617]: time="2025-12-16T12:55:31.134201302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z9vdx,Uid:107826fa-5f79-4e3f-adda-b03eba65230e,Namespace:kube-system,Attempt:0,}" Dec 16 12:55:31.160072 containerd[1617]: time="2025-12-16T12:55:31.159886506Z" level=info msg="connecting to shim a588f51f54af72f6dabe1b5133baf8f36a6d68f9f4475525f8c8f6378d211216" address="unix:///run/containerd/s/81f7532bf5270871ade871df18ade075131c8633bf6a66e58c53a86604f6a588" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:55:31.202027 systemd[1]: Started cri-containerd-a588f51f54af72f6dabe1b5133baf8f36a6d68f9f4475525f8c8f6378d211216.scope - libcontainer container a588f51f54af72f6dabe1b5133baf8f36a6d68f9f4475525f8c8f6378d211216. 
Dec 16 12:55:31.218000 audit: BPF prog-id=131 op=LOAD Dec 16 12:55:31.219778 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 16 12:55:31.219856 kernel: audit: type=1334 audit(1765889731.218:433): prog-id=131 op=LOAD Dec 16 12:55:31.222000 audit: BPF prog-id=132 op=LOAD Dec 16 12:55:31.222000 audit[2859]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2847 pid=2859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:31.227726 kernel: audit: type=1334 audit(1765889731.222:434): prog-id=132 op=LOAD Dec 16 12:55:31.228012 kernel: audit: type=1300 audit(1765889731.222:434): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2847 pid=2859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:31.222000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135383866353166353461663732663664616265316235313333626166 Dec 16 12:55:31.232357 kernel: audit: type=1327 audit(1765889731.222:434): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135383866353166353461663732663664616265316235313333626166 Dec 16 12:55:31.222000 audit: BPF prog-id=132 op=UNLOAD Dec 16 12:55:31.222000 audit[2859]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2847 pid=2859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:31.239168 kernel: audit: type=1334 audit(1765889731.222:435): prog-id=132 op=UNLOAD Dec 16 12:55:31.239228 kernel: audit: type=1300 audit(1765889731.222:435): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2847 pid=2859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:31.222000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135383866353166353461663732663664616265316235313333626166 Dec 16 12:55:31.222000 audit: BPF prog-id=133 op=LOAD Dec 16 12:55:31.247040 kernel: audit: type=1327 audit(1765889731.222:435): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135383866353166353461663732663664616265316235313333626166 Dec 16 12:55:31.247148 kernel: audit: type=1334 audit(1765889731.222:436): prog-id=133 op=LOAD Dec 16 12:55:31.222000 audit[2859]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2847 pid=2859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:31.249089 kernel: audit: type=1300 audit(1765889731.222:436): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2847 pid=2859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:55:31.250867 containerd[1617]: time="2025-12-16T12:55:31.250623962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-w82cz,Uid:1ac7c11d-8bc0-4682-91a6-75cdd28eee69,Namespace:tigera-operator,Attempt:0,}" Dec 16 12:55:31.222000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135383866353166353461663732663664616265316235313333626166 Dec 16 12:55:31.253665 kernel: audit: type=1327 audit(1765889731.222:436): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135383866353166353461663732663664616265316235313333626166 Dec 16 12:55:31.222000 audit: BPF prog-id=134 op=LOAD Dec 16 12:55:31.222000 audit[2859]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=2847 pid=2859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:31.222000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135383866353166353461663732663664616265316235313333626166 Dec 16 12:55:31.222000 audit: BPF prog-id=134 op=UNLOAD Dec 16 12:55:31.222000 audit[2859]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2847 pid=2859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:31.222000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135383866353166353461663732663664616265316235313333626166 Dec 16 12:55:31.222000 audit: BPF prog-id=133 op=UNLOAD Dec 16 12:55:31.222000 audit[2859]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2847 pid=2859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:31.222000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135383866353166353461663732663664616265316235313333626166 Dec 16 12:55:31.222000 audit: BPF prog-id=135 op=LOAD Dec 16 12:55:31.222000 audit[2859]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=2847 pid=2859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:31.222000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135383866353166353461663732663664616265316235313333626166 Dec 16 12:55:31.284910 containerd[1617]: time="2025-12-16T12:55:31.284847297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z9vdx,Uid:107826fa-5f79-4e3f-adda-b03eba65230e,Namespace:kube-system,Attempt:0,} returns sandbox id \"a588f51f54af72f6dabe1b5133baf8f36a6d68f9f4475525f8c8f6378d211216\"" Dec 16 12:55:31.289825 kubelet[2789]: E1216 
12:55:31.289043 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:31.298249 containerd[1617]: time="2025-12-16T12:55:31.298200687Z" level=info msg="connecting to shim 7cf739c3b669d21f73362505d21c0199e73e2859fc497495c7b083b7eb13173c" address="unix:///run/containerd/s/045c2c7c9cf97e229e771837e6f4ea943c87a845e286a174ad5ff91fa5f057e2" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:55:31.300226 containerd[1617]: time="2025-12-16T12:55:31.300177232Z" level=info msg="CreateContainer within sandbox \"a588f51f54af72f6dabe1b5133baf8f36a6d68f9f4475525f8c8f6378d211216\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 12:55:31.315654 containerd[1617]: time="2025-12-16T12:55:31.315562635Z" level=info msg="Container 9614814207d7100dfb3402849c9b387df7fb00afdf9317b23ed9faa6306fa378: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:55:31.326766 containerd[1617]: time="2025-12-16T12:55:31.326595413Z" level=info msg="CreateContainer within sandbox \"a588f51f54af72f6dabe1b5133baf8f36a6d68f9f4475525f8c8f6378d211216\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9614814207d7100dfb3402849c9b387df7fb00afdf9317b23ed9faa6306fa378\"" Dec 16 12:55:31.333708 containerd[1617]: time="2025-12-16T12:55:31.332143002Z" level=info msg="StartContainer for \"9614814207d7100dfb3402849c9b387df7fb00afdf9317b23ed9faa6306fa378\"" Dec 16 12:55:31.335426 containerd[1617]: time="2025-12-16T12:55:31.334949626Z" level=info msg="connecting to shim 9614814207d7100dfb3402849c9b387df7fb00afdf9317b23ed9faa6306fa378" address="unix:///run/containerd/s/81f7532bf5270871ade871df18ade075131c8633bf6a66e58c53a86604f6a588" protocol=ttrpc version=3 Dec 16 12:55:31.347027 systemd[1]: Started cri-containerd-7cf739c3b669d21f73362505d21c0199e73e2859fc497495c7b083b7eb13173c.scope - libcontainer container 
7cf739c3b669d21f73362505d21c0199e73e2859fc497495c7b083b7eb13173c. Dec 16 12:55:31.366000 audit: BPF prog-id=136 op=LOAD Dec 16 12:55:31.367000 audit: BPF prog-id=137 op=LOAD Dec 16 12:55:31.367000 audit[2903]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2890 pid=2903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:31.367000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763663733396333623636396432316637333336323530356432316330 Dec 16 12:55:31.367000 audit: BPF prog-id=137 op=UNLOAD Dec 16 12:55:31.367000 audit[2903]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2890 pid=2903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:31.367000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763663733396333623636396432316637333336323530356432316330 Dec 16 12:55:31.368000 audit: BPF prog-id=138 op=LOAD Dec 16 12:55:31.368000 audit[2903]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2890 pid=2903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:31.368000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763663733396333623636396432316637333336323530356432316330 Dec 16 12:55:31.368000 audit: BPF prog-id=139 op=LOAD Dec 16 12:55:31.368000 audit[2903]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=2890 pid=2903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:31.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763663733396333623636396432316637333336323530356432316330 Dec 16 12:55:31.368000 audit: BPF prog-id=139 op=UNLOAD Dec 16 12:55:31.368000 audit[2903]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2890 pid=2903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:31.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763663733396333623636396432316637333336323530356432316330 Dec 16 12:55:31.368000 audit: BPF prog-id=138 op=UNLOAD Dec 16 12:55:31.368000 audit[2903]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2890 pid=2903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:55:31.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763663733396333623636396432316637333336323530356432316330 Dec 16 12:55:31.368000 audit: BPF prog-id=140 op=LOAD Dec 16 12:55:31.368000 audit[2903]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=2890 pid=2903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:31.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763663733396333623636396432316637333336323530356432316330 Dec 16 12:55:31.380315 systemd[1]: Started cri-containerd-9614814207d7100dfb3402849c9b387df7fb00afdf9317b23ed9faa6306fa378.scope - libcontainer container 9614814207d7100dfb3402849c9b387df7fb00afdf9317b23ed9faa6306fa378. 
Dec 16 12:55:31.430699 containerd[1617]: time="2025-12-16T12:55:31.428325738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-w82cz,Uid:1ac7c11d-8bc0-4682-91a6-75cdd28eee69,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7cf739c3b669d21f73362505d21c0199e73e2859fc497495c7b083b7eb13173c\"" Dec 16 12:55:31.436661 containerd[1617]: time="2025-12-16T12:55:31.436596963Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 12:55:31.449000 audit: BPF prog-id=141 op=LOAD Dec 16 12:55:31.449000 audit[2916]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2847 pid=2916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:31.449000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936313438313432303764373130306466623334303238343963396233 Dec 16 12:55:31.449000 audit: BPF prog-id=142 op=LOAD Dec 16 12:55:31.449000 audit[2916]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2847 pid=2916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:31.449000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936313438313432303764373130306466623334303238343963396233 Dec 16 12:55:31.449000 audit: BPF prog-id=142 op=UNLOAD Dec 16 12:55:31.449000 audit[2916]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 
a1=0 a2=0 a3=0 items=0 ppid=2847 pid=2916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:31.449000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936313438313432303764373130306466623334303238343963396233 Dec 16 12:55:31.449000 audit: BPF prog-id=141 op=UNLOAD Dec 16 12:55:31.449000 audit[2916]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2847 pid=2916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:31.449000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936313438313432303764373130306466623334303238343963396233 Dec 16 12:55:31.450000 audit: BPF prog-id=143 op=LOAD Dec 16 12:55:31.450000 audit[2916]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2847 pid=2916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:31.450000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936313438313432303764373130306466623334303238343963396233 Dec 16 12:55:31.485038 containerd[1617]: time="2025-12-16T12:55:31.484871908Z" level=info msg="StartContainer for 
\"9614814207d7100dfb3402849c9b387df7fb00afdf9317b23ed9faa6306fa378\" returns successfully" Dec 16 12:55:31.642654 kubelet[2789]: E1216 12:55:31.642067 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:31.879000 audit[2994]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=2994 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:55:31.879000 audit[2994]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe4d692730 a2=0 a3=7ffe4d69271c items=0 ppid=2936 pid=2994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:31.879000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 12:55:31.882000 audit[2995]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=2995 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:55:31.882000 audit[2995]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe26d3ca60 a2=0 a3=7ffe26d3ca4c items=0 ppid=2936 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:31.882000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 12:55:31.884000 audit[2997]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=2997 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:55:31.884000 audit[2997]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd0e331c20 a2=0 a3=7ffd0e331c0c 
items=0 ppid=2936 pid=2997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:31.884000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 12:55:31.886000 audit[2998]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=2998 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:55:31.886000 audit[2998]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcaad7e350 a2=0 a3=7ffcaad7e33c items=0 ppid=2936 pid=2998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:31.886000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 12:55:31.888000 audit[2999]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_chain pid=2999 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:55:31.888000 audit[2999]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe98f16340 a2=0 a3=7ffe98f1632c items=0 ppid=2936 pid=2999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:31.888000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 12:55:31.895000 audit[3000]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3000 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:55:31.895000 audit[3000]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 
a1=7ffdf3423570 a2=0 a3=7ffdf342355c items=0 ppid=2936 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:31.895000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572
Dec 16 12:55:32.002000 audit[3006]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3006 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:55:32.002000 audit[3006]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fffa9f34cb0 a2=0 a3=7fffa9f34c9c items=0 ppid=2936 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.002000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572
Dec 16 12:55:32.008000 audit[3008]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3008 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:55:32.008000 audit[3008]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff6820ce60 a2=0 a3=7fff6820ce4c items=0 ppid=2936 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.008000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365
Dec 16 12:55:32.017000 audit[3011]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3011 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:55:32.017000 audit[3011]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffffd564b60 a2=0 a3=7ffffd564b4c items=0 ppid=2936 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.017000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669
Dec 16 12:55:32.019000 audit[3012]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3012 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:55:32.019000 audit[3012]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffecde17330 a2=0 a3=7ffecde1731c items=0 ppid=2936 pid=3012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.019000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572
Dec 16 12:55:32.024000 audit[3014]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3014 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:55:32.024000 audit[3014]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffc46f7160 a2=0 a3=7fffc46f714c items=0 ppid=2936 pid=3014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.024000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453
Dec 16 12:55:32.027000 audit[3015]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3015 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:55:32.027000 audit[3015]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdb5083770 a2=0 a3=7ffdb508375c items=0 ppid=2936 pid=3015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.027000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572
Dec 16 12:55:32.031000 audit[3017]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3017 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:55:32.031000 audit[3017]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffebdb7e10 a2=0 a3=7fffebdb7dfc items=0 ppid=2936 pid=3017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.031000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D
Dec 16 12:55:32.038000 audit[3020]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3020 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:55:32.038000 audit[3020]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcbb591140 a2=0 a3=7ffcbb59112c items=0 ppid=2936 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.038000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53
Dec 16 12:55:32.040000 audit[3021]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3021 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:55:32.040000 audit[3021]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce4be51d0 a2=0 a3=7ffce4be51bc items=0 ppid=2936 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.040000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572
Dec 16 12:55:32.044000 audit[3023]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3023 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:55:32.044000 audit[3023]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe09b34380 a2=0 a3=7ffe09b3436c items=0 ppid=2936 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.044000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244
Dec 16 12:55:32.047000 audit[3024]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3024 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:55:32.047000 audit[3024]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc1b793770 a2=0 a3=7ffc1b79375c items=0 ppid=2936 pid=3024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.047000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572
Dec 16 12:55:32.051000 audit[3026]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3026 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:55:32.051000 audit[3026]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd0bc2b7b0 a2=0 a3=7ffd0bc2b79c items=0 ppid=2936 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.051000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A
Dec 16 12:55:32.058000 audit[3029]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3029 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:55:32.058000 audit[3029]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffed4578700 a2=0 a3=7ffed45786ec items=0 ppid=2936 pid=3029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.058000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A
Dec 16 12:55:32.065000 audit[3032]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3032 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:55:32.065000 audit[3032]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd944cd730 a2=0 a3=7ffd944cd71c items=0 ppid=2936 pid=3032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.065000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D
Dec 16 12:55:32.067000 audit[3033]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3033 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:55:32.067000 audit[3033]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe0e4855c0 a2=0 a3=7ffe0e4855ac items=0 ppid=2936 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.067000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174
Dec 16 12:55:32.072000 audit[3035]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3035 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:55:32.072000 audit[3035]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe36f621c0 a2=0 a3=7ffe36f621ac items=0 ppid=2936 pid=3035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.072000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553
Dec 16 12:55:32.078000 audit[3038]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3038 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:55:32.078000 audit[3038]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe844a8a90 a2=0 a3=7ffe844a8a7c items=0 ppid=2936 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.078000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553
Dec 16 12:55:32.081000 audit[3039]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3039 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:55:32.081000 audit[3039]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc1814c580 a2=0 a3=7ffc1814c56c items=0 ppid=2936 pid=3039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.081000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174
Dec 16 12:55:32.086000 audit[3041]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3041 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:55:32.086000 audit[3041]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fffd28c2660 a2=0 a3=7fffd28c264c items=0 ppid=2936 pid=3041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.086000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47
Dec 16 12:55:32.119000 audit[3047]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3047 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 16 12:55:32.119000 audit[3047]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd934f0d10 a2=0 a3=7ffd934f0cfc items=0 ppid=2936 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.119000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 16 12:55:32.133000 audit[3047]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3047 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 16 12:55:32.133000 audit[3047]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd934f0d10 a2=0 a3=7ffd934f0cfc items=0 ppid=2936 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.133000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 16 12:55:32.137000 audit[3052]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3052 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:55:32.137000 audit[3052]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff45831340 a2=0 a3=7fff4583132c items=0 ppid=2936 pid=3052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.137000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572
Dec 16 12:55:32.141000 audit[3054]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3054 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:55:32.141000 audit[3054]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc28b1d820 a2=0 a3=7ffc28b1d80c items=0 ppid=2936 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.141000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963
Dec 16 12:55:32.148000 audit[3057]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3057 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:55:32.148000 audit[3057]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc275ef620 a2=0 a3=7ffc275ef60c items=0 ppid=2936 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.148000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276
Dec 16 12:55:32.151000 audit[3058]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3058 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:55:32.151000 audit[3058]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe6c73a820 a2=0 a3=7ffe6c73a80c items=0 ppid=2936 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.151000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572
Dec 16 12:55:32.156000 audit[3060]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3060 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:55:32.156000 audit[3060]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdf5f0a460 a2=0 a3=7ffdf5f0a44c items=0 ppid=2936 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.156000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453
Dec 16 12:55:32.158000 audit[3061]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3061 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:55:32.158000 audit[3061]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd6f851b80 a2=0 a3=7ffd6f851b6c items=0 ppid=2936 pid=3061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.158000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572
Dec 16 12:55:32.163000 audit[3063]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3063 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:55:32.163000 audit[3063]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe63027530 a2=0 a3=7ffe6302751c items=0 ppid=2936 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.163000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245
Dec 16 12:55:32.169000 audit[3066]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3066 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:55:32.169000 audit[3066]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffc839b79e0 a2=0 a3=7ffc839b79cc items=0 ppid=2936 pid=3066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.169000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D
Dec 16 12:55:32.171000 audit[3067]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3067 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:55:32.171000 audit[3067]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc5b6e0f60 a2=0 a3=7ffc5b6e0f4c items=0 ppid=2936 pid=3067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.171000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572
Dec 16 12:55:32.175000 audit[3069]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3069 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:55:32.175000 audit[3069]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc5e4f15c0 a2=0 a3=7ffc5e4f15ac items=0 ppid=2936 pid=3069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.175000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244
Dec 16 12:55:32.177000 audit[3070]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3070 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:55:32.177000 audit[3070]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffea1394c0 a2=0 a3=7fffea1394ac items=0 ppid=2936 pid=3070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.177000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572
Dec 16 12:55:32.181000 audit[3072]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3072 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:55:32.181000 audit[3072]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffccecfc940 a2=0 a3=7ffccecfc92c items=0 ppid=2936 pid=3072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.181000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A
Dec 16 12:55:32.187000 audit[3075]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3075 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:55:32.187000 audit[3075]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcfeab4430 a2=0 a3=7ffcfeab441c items=0 ppid=2936 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.187000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D
Dec 16 12:55:32.194000 audit[3078]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3078 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:55:32.194000 audit[3078]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff54ae5970 a2=0 a3=7fff54ae595c items=0 ppid=2936 pid=3078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.194000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C
Dec 16 12:55:32.196000 audit[3079]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3079 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:55:32.196000 audit[3079]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffddb1d1e80 a2=0 a3=7ffddb1d1e6c items=0 ppid=2936 pid=3079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.196000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174
Dec 16 12:55:32.199000 audit[3081]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3081 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:55:32.199000 audit[3081]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffed3adedb0 a2=0 a3=7ffed3aded9c items=0 ppid=2936 pid=3081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.199000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553
Dec 16 12:55:32.205000 audit[3084]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3084 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:55:32.205000 audit[3084]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd13e37590 a2=0 a3=7ffd13e3757c items=0 ppid=2936 pid=3084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.205000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553
Dec 16 12:55:32.206000 audit[3085]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3085 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:55:32.206000 audit[3085]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd76fef1a0 a2=0 a3=7ffd76fef18c items=0 ppid=2936 pid=3085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.206000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174
Dec 16 12:55:32.210000 audit[3087]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3087 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:55:32.210000 audit[3087]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffce6f0af10 a2=0 a3=7ffce6f0aefc items=0 ppid=2936 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.210000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47
Dec 16 12:55:32.212000 audit[3088]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3088 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:55:32.212000 audit[3088]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff9b07e6c0 a2=0 a3=7fff9b07e6ac items=0 ppid=2936 pid=3088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.212000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572
Dec 16 12:55:32.216000 audit[3090]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3090 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:55:32.216000 audit[3090]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdb8b6ddb0 a2=0 a3=7ffdb8b6dd9c items=0 ppid=2936 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.216000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Dec 16 12:55:32.226000 audit[3093]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3093 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:55:32.226000 audit[3093]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcd800b140 a2=0 a3=7ffcd800b12c items=0 ppid=2936 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.226000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Dec 16 12:55:32.233000 audit[3095]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3095 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto"
Dec 16 12:55:32.233000 audit[3095]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffef3961680 a2=0 a3=7ffef396166c items=0 ppid=2936 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.233000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 16 12:55:32.233000 audit[3095]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3095 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto"
Dec 16 12:55:32.233000 audit[3095]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffef3961680 a2=0 a3=7ffef396166c items=0 ppid=2936 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:55:32.233000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 16 12:55:32.476618 kubelet[2789]: E1216 12:55:32.476562 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 16 12:55:32.476618 kubelet[2789]: E1216 12:55:32.476562 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 16 12:55:32.915502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2573985638.mount: Deactivated successfully.
Dec 16 12:55:33.480219 kubelet[2789]: E1216 12:55:33.480175 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 16 12:55:33.482395 kubelet[2789]: E1216 12:55:33.481138 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 16 12:55:33.751691 containerd[1617]: time="2025-12-16T12:55:33.750242355Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:55:33.751691 containerd[1617]: time="2025-12-16T12:55:33.751214780Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205"
Dec 16 12:55:33.751691 containerd[1617]: time="2025-12-16T12:55:33.751430960Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:55:33.753861 containerd[1617]: time="2025-12-16T12:55:33.753815909Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:55:33.755109 containerd[1617]: time="2025-12-16T12:55:33.755062596Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.318427073s"
Dec 16 12:55:33.755109 containerd[1617]: time="2025-12-16T12:55:33.755100285Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Dec 16 12:55:33.761599 containerd[1617]: time="2025-12-16T12:55:33.761545367Z" level=info msg="CreateContainer within sandbox \"7cf739c3b669d21f73362505d21c0199e73e2859fc497495c7b083b7eb13173c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Dec 16 12:55:33.769662 containerd[1617]: time="2025-12-16T12:55:33.769294820Z" level=info msg="Container 17fecce7a3c4512b2a80bb72c237106e273967aa97e642d2df56269ee4400803: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:55:33.775801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount156576624.mount: Deactivated successfully.
Dec 16 12:55:33.782540 containerd[1617]: time="2025-12-16T12:55:33.782467398Z" level=info msg="CreateContainer within sandbox \"7cf739c3b669d21f73362505d21c0199e73e2859fc497495c7b083b7eb13173c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"17fecce7a3c4512b2a80bb72c237106e273967aa97e642d2df56269ee4400803\""
Dec 16 12:55:33.783353 containerd[1617]: time="2025-12-16T12:55:33.783320020Z" level=info msg="StartContainer for \"17fecce7a3c4512b2a80bb72c237106e273967aa97e642d2df56269ee4400803\""
Dec 16 12:55:33.785099 containerd[1617]: time="2025-12-16T12:55:33.784559102Z" level=info msg="connecting to shim 17fecce7a3c4512b2a80bb72c237106e273967aa97e642d2df56269ee4400803" address="unix:///run/containerd/s/045c2c7c9cf97e229e771837e6f4ea943c87a845e286a174ad5ff91fa5f057e2" protocol=ttrpc version=3
Dec 16 12:55:33.819974 systemd[1]: Started cri-containerd-17fecce7a3c4512b2a80bb72c237106e273967aa97e642d2df56269ee4400803.scope - libcontainer container 17fecce7a3c4512b2a80bb72c237106e273967aa97e642d2df56269ee4400803.
Dec 16 12:55:33.835000 audit: BPF prog-id=144 op=LOAD Dec 16 12:55:33.836000 audit: BPF prog-id=145 op=LOAD Dec 16 12:55:33.836000 audit[3104]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2890 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:33.836000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137666563636537613363343531326232613830626237326332333731 Dec 16 12:55:33.836000 audit: BPF prog-id=145 op=UNLOAD Dec 16 12:55:33.836000 audit[3104]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2890 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:33.836000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137666563636537613363343531326232613830626237326332333731 Dec 16 12:55:33.836000 audit: BPF prog-id=146 op=LOAD Dec 16 12:55:33.836000 audit[3104]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2890 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:33.836000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137666563636537613363343531326232613830626237326332333731 Dec 16 12:55:33.837000 audit: BPF prog-id=147 op=LOAD Dec 16 12:55:33.837000 audit[3104]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2890 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:33.837000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137666563636537613363343531326232613830626237326332333731 Dec 16 12:55:33.837000 audit: BPF prog-id=147 op=UNLOAD Dec 16 12:55:33.837000 audit[3104]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2890 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:33.837000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137666563636537613363343531326232613830626237326332333731 Dec 16 12:55:33.837000 audit: BPF prog-id=146 op=UNLOAD Dec 16 12:55:33.837000 audit[3104]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2890 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:55:33.837000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137666563636537613363343531326232613830626237326332333731 Dec 16 12:55:33.837000 audit: BPF prog-id=148 op=LOAD Dec 16 12:55:33.837000 audit[3104]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2890 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:33.837000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137666563636537613363343531326232613830626237326332333731 Dec 16 12:55:33.874460 containerd[1617]: time="2025-12-16T12:55:33.874344226Z" level=info msg="StartContainer for \"17fecce7a3c4512b2a80bb72c237106e273967aa97e642d2df56269ee4400803\" returns successfully" Dec 16 12:55:34.509449 kubelet[2789]: I1216 12:55:34.508896 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-z9vdx" podStartSLOduration=4.5088731079999995 podStartE2EDuration="4.508873108s" podCreationTimestamp="2025-12-16 12:55:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:55:32.490338141 +0000 UTC m=+6.315807583" watchObservedRunningTime="2025-12-16 12:55:34.508873108 +0000 UTC m=+8.334342539" Dec 16 12:55:37.226267 kubelet[2789]: E1216 12:55:37.226177 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 
16 12:55:37.272160 kubelet[2789]: I1216 12:55:37.272095 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-w82cz" podStartSLOduration=4.95044665 podStartE2EDuration="7.272059853s" podCreationTimestamp="2025-12-16 12:55:30 +0000 UTC" firstStartedPulling="2025-12-16 12:55:31.435828203 +0000 UTC m=+5.261297598" lastFinishedPulling="2025-12-16 12:55:33.757441408 +0000 UTC m=+7.582910801" observedRunningTime="2025-12-16 12:55:34.509226027 +0000 UTC m=+8.334695458" watchObservedRunningTime="2025-12-16 12:55:37.272059853 +0000 UTC m=+11.097529275" Dec 16 12:55:37.279717 systemd[1]: cri-containerd-17fecce7a3c4512b2a80bb72c237106e273967aa97e642d2df56269ee4400803.scope: Deactivated successfully. Dec 16 12:55:37.285141 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 16 12:55:37.285284 kernel: audit: type=1334 audit(1765889737.282:513): prog-id=144 op=UNLOAD Dec 16 12:55:37.282000 audit: BPF prog-id=144 op=UNLOAD Dec 16 12:55:37.282000 audit: BPF prog-id=148 op=UNLOAD Dec 16 12:55:37.288818 kernel: audit: type=1334 audit(1765889737.282:514): prog-id=148 op=UNLOAD Dec 16 12:55:37.353339 containerd[1617]: time="2025-12-16T12:55:37.353162521Z" level=info msg="received container exit event container_id:\"17fecce7a3c4512b2a80bb72c237106e273967aa97e642d2df56269ee4400803\" id:\"17fecce7a3c4512b2a80bb72c237106e273967aa97e642d2df56269ee4400803\" pid:3117 exit_status:1 exited_at:{seconds:1765889737 nanos:283648493}" Dec 16 12:55:37.392299 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17fecce7a3c4512b2a80bb72c237106e273967aa97e642d2df56269ee4400803-rootfs.mount: Deactivated successfully. 
Dec 16 12:55:37.493906 kubelet[2789]: I1216 12:55:37.493396 2789 scope.go:117] "RemoveContainer" containerID="17fecce7a3c4512b2a80bb72c237106e273967aa97e642d2df56269ee4400803" Dec 16 12:55:37.494598 kubelet[2789]: E1216 12:55:37.494337 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:37.498274 containerd[1617]: time="2025-12-16T12:55:37.498240403Z" level=info msg="CreateContainer within sandbox \"7cf739c3b669d21f73362505d21c0199e73e2859fc497495c7b083b7eb13173c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Dec 16 12:55:37.516061 containerd[1617]: time="2025-12-16T12:55:37.515227884Z" level=info msg="Container 98c0a6ccd4f2d923b976e9a1f84597918151eac910121d8fbd93009413c24de6: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:55:37.521509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1623296795.mount: Deactivated successfully. 
Dec 16 12:55:37.532924 containerd[1617]: time="2025-12-16T12:55:37.532880469Z" level=info msg="CreateContainer within sandbox \"7cf739c3b669d21f73362505d21c0199e73e2859fc497495c7b083b7eb13173c\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"98c0a6ccd4f2d923b976e9a1f84597918151eac910121d8fbd93009413c24de6\"" Dec 16 12:55:37.535256 containerd[1617]: time="2025-12-16T12:55:37.535112508Z" level=info msg="StartContainer for \"98c0a6ccd4f2d923b976e9a1f84597918151eac910121d8fbd93009413c24de6\"" Dec 16 12:55:37.539748 containerd[1617]: time="2025-12-16T12:55:37.539702113Z" level=info msg="connecting to shim 98c0a6ccd4f2d923b976e9a1f84597918151eac910121d8fbd93009413c24de6" address="unix:///run/containerd/s/045c2c7c9cf97e229e771837e6f4ea943c87a845e286a174ad5ff91fa5f057e2" protocol=ttrpc version=3 Dec 16 12:55:37.586450 systemd[1]: Started cri-containerd-98c0a6ccd4f2d923b976e9a1f84597918151eac910121d8fbd93009413c24de6.scope - libcontainer container 98c0a6ccd4f2d923b976e9a1f84597918151eac910121d8fbd93009413c24de6. 
Dec 16 12:55:37.799687 kernel: audit: type=1334 audit(1765889737.796:515): prog-id=149 op=LOAD Dec 16 12:55:37.796000 audit: BPF prog-id=149 op=LOAD Dec 16 12:55:37.797000 audit: BPF prog-id=150 op=LOAD Dec 16 12:55:37.801862 kernel: audit: type=1334 audit(1765889737.797:516): prog-id=150 op=LOAD Dec 16 12:55:37.797000 audit[3161]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2890 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:37.806658 kernel: audit: type=1300 audit(1765889737.797:516): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2890 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:37.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938633061366363643466326439323362393736653961316638343539 Dec 16 12:55:37.811683 kernel: audit: type=1327 audit(1765889737.797:516): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938633061366363643466326439323362393736653961316638343539 Dec 16 12:55:37.797000 audit: BPF prog-id=150 op=UNLOAD Dec 16 12:55:37.818946 kernel: audit: type=1334 audit(1765889737.797:517): prog-id=150 op=UNLOAD Dec 16 12:55:37.819059 kernel: audit: type=1300 audit(1765889737.797:517): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2890 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:37.797000 audit[3161]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2890 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:37.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938633061366363643466326439323362393736653961316638343539 Dec 16 12:55:37.828713 kernel: audit: type=1327 audit(1765889737.797:517): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938633061366363643466326439323362393736653961316638343539 Dec 16 12:55:37.797000 audit: BPF prog-id=151 op=LOAD Dec 16 12:55:37.797000 audit[3161]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2890 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:37.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938633061366363643466326439323362393736653961316638343539 Dec 16 12:55:37.797000 audit: BPF prog-id=152 op=LOAD Dec 16 12:55:37.797000 audit[3161]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2890 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:37.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938633061366363643466326439323362393736653961316638343539 Dec 16 12:55:37.798000 audit: BPF prog-id=152 op=UNLOAD Dec 16 12:55:37.798000 audit[3161]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2890 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:37.798000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938633061366363643466326439323362393736653961316638343539 Dec 16 12:55:37.798000 audit: BPF prog-id=151 op=UNLOAD Dec 16 12:55:37.798000 audit[3161]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2890 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:37.798000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938633061366363643466326439323362393736653961316638343539 Dec 16 12:55:37.832658 kernel: audit: type=1334 audit(1765889737.797:518): prog-id=151 op=LOAD Dec 16 12:55:37.798000 audit: BPF prog-id=153 op=LOAD Dec 16 12:55:37.798000 audit[3161]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2890 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:37.798000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938633061366363643466326439323362393736653961316638343539 Dec 16 12:55:37.861481 containerd[1617]: time="2025-12-16T12:55:37.861417610Z" level=info msg="StartContainer for \"98c0a6ccd4f2d923b976e9a1f84597918151eac910121d8fbd93009413c24de6\" returns successfully" Dec 16 12:55:38.645828 update_engine[1587]: I20251216 12:55:38.645752 1587 update_attempter.cc:509] Updating boot flags... Dec 16 12:55:40.561025 sudo[1853]: pam_unix(sudo:session): session closed for user root Dec 16 12:55:40.560000 audit[1853]: USER_END pid=1853 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:55:40.560000 audit[1853]: CRED_DISP pid=1853 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 12:55:40.565810 sshd[1852]: Connection closed by 147.75.109.163 port 40870 Dec 16 12:55:40.566174 sshd-session[1849]: pam_unix(sshd:session): session closed for user core Dec 16 12:55:40.567000 audit[1849]: USER_END pid=1849 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:55:40.567000 audit[1849]: CRED_DISP pid=1849 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:55:40.571819 systemd[1]: sshd@6-164.90.155.252:22-147.75.109.163:40870.service: Deactivated successfully. Dec 16 12:55:40.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-164.90.155.252:22-147.75.109.163:40870 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:55:40.575469 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 12:55:40.575848 systemd[1]: session-7.scope: Consumed 6.337s CPU time, 159.4M memory peak. Dec 16 12:55:40.578184 systemd-logind[1584]: Session 7 logged out. Waiting for processes to exit. Dec 16 12:55:40.580607 systemd-logind[1584]: Removed session 7. 
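The audit PROCTITLE records above store the audited command line as a hex string: the process's argv, NUL-separated between arguments. A minimal sketch for decoding one back to a readable command (plain Python 3, no external dependencies; the sample value is the proctitle logged for pid 3095 above):

```python
# Decode an audit PROCTITLE value: hex-encoded argv with NUL bytes
# separating the individual arguments.
def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    return " ".join(arg.decode("utf-8", errors="replace")
                    for arg in raw.split(b"\x00"))

print(decode_proctitle(
    "6970367461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"
))
# → ip6tables-restore -w 5 -W 100000 --noflush --counters
```

The same function applies to the runc and iptables-restore PROCTITLE records elsewhere in this log.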
Dec 16 12:55:42.696013 kernel: kauditd_printk_skb: 19 callbacks suppressed Dec 16 12:55:42.696162 kernel: audit: type=1325 audit(1765889742.690:528): table=filter:105 family=2 entries=14 op=nft_register_rule pid=3243 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:55:42.690000 audit[3243]: NETFILTER_CFG table=filter:105 family=2 entries=14 op=nft_register_rule pid=3243 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:55:42.690000 audit[3243]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd396c7860 a2=0 a3=7ffd396c784c items=0 ppid=2936 pid=3243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:42.704656 kernel: audit: type=1300 audit(1765889742.690:528): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd396c7860 a2=0 a3=7ffd396c784c items=0 ppid=2936 pid=3243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:42.690000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:55:42.708659 kernel: audit: type=1327 audit(1765889742.690:528): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:55:42.710000 audit[3243]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3243 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:55:42.714698 kernel: audit: type=1325 audit(1765889742.710:529): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3243 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:55:42.710000 audit[3243]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=2700 a0=3 a1=7ffd396c7860 a2=0 a3=0 items=0 ppid=2936 pid=3243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:42.720953 kernel: audit: type=1300 audit(1765889742.710:529): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd396c7860 a2=0 a3=0 items=0 ppid=2936 pid=3243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:42.724799 kernel: audit: type=1327 audit(1765889742.710:529): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:55:42.710000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:55:42.747000 audit[3245]: NETFILTER_CFG table=filter:107 family=2 entries=15 op=nft_register_rule pid=3245 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:55:42.752655 kernel: audit: type=1325 audit(1765889742.747:530): table=filter:107 family=2 entries=15 op=nft_register_rule pid=3245 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:55:42.747000 audit[3245]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffeb2153390 a2=0 a3=7ffeb215337c items=0 ppid=2936 pid=3245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:42.758624 kernel: audit: type=1300 audit(1765889742.747:530): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffeb2153390 a2=0 a3=7ffeb215337c items=0 ppid=2936 pid=3245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:42.747000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:55:42.762650 kernel: audit: type=1327 audit(1765889742.747:530): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:55:42.765674 kernel: audit: type=1325 audit(1765889742.761:531): table=nat:108 family=2 entries=12 op=nft_register_rule pid=3245 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:55:42.761000 audit[3245]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3245 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:55:42.761000 audit[3245]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffeb2153390 a2=0 a3=0 items=0 ppid=2936 pid=3245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:42.761000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:55:45.318000 audit[3247]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3247 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:55:45.318000 audit[3247]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff15178760 a2=0 a3=7fff1517874c items=0 ppid=2936 pid=3247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:45.318000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:55:45.323000 audit[3247]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3247 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:55:45.323000 audit[3247]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff15178760 a2=0 a3=0 items=0 ppid=2936 pid=3247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:45.323000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:55:45.362000 audit[3249]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3249 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:55:45.362000 audit[3249]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffcfe2987f0 a2=0 a3=7ffcfe2987dc items=0 ppid=2936 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:45.362000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:55:45.367000 audit[3249]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3249 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:55:45.367000 audit[3249]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcfe2987f0 a2=0 a3=0 items=0 ppid=2936 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 12:55:45.367000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:55:46.451000 audit[3251]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3251 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:55:46.451000 audit[3251]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc89636c10 a2=0 a3=7ffc89636bfc items=0 ppid=2936 pid=3251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:46.451000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:55:46.457000 audit[3251]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3251 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:55:46.457000 audit[3251]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc89636c10 a2=0 a3=0 items=0 ppid=2936 pid=3251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:46.457000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:55:47.418000 audit[3254]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3254 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:55:47.418000 audit[3254]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffccbbe6400 a2=0 a3=7ffccbbe63ec items=0 ppid=2936 pid=3254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:47.418000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:55:47.439000 audit[3254]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3254 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:55:47.453479 systemd[1]: Created slice kubepods-besteffort-podcae2e9e2_c1c3_4b02_98a6_805058edcd31.slice - libcontainer container kubepods-besteffort-podcae2e9e2_c1c3_4b02_98a6_805058edcd31.slice. Dec 16 12:55:47.439000 audit[3254]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffccbbe6400 a2=0 a3=0 items=0 ppid=2936 pid=3254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:47.439000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:55:47.495946 kubelet[2789]: I1216 12:55:47.495843 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pxvf\" (UniqueName: \"kubernetes.io/projected/cae2e9e2-c1c3-4b02-98a6-805058edcd31-kube-api-access-9pxvf\") pod \"calico-typha-74886999c5-kss6h\" (UID: \"cae2e9e2-c1c3-4b02-98a6-805058edcd31\") " pod="calico-system/calico-typha-74886999c5-kss6h" Dec 16 12:55:47.495946 kubelet[2789]: I1216 12:55:47.495894 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cae2e9e2-c1c3-4b02-98a6-805058edcd31-tigera-ca-bundle\") pod \"calico-typha-74886999c5-kss6h\" (UID: \"cae2e9e2-c1c3-4b02-98a6-805058edcd31\") " pod="calico-system/calico-typha-74886999c5-kss6h" Dec 16 12:55:47.495946 kubelet[2789]: 
I1216 12:55:47.495944 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/cae2e9e2-c1c3-4b02-98a6-805058edcd31-typha-certs\") pod \"calico-typha-74886999c5-kss6h\" (UID: \"cae2e9e2-c1c3-4b02-98a6-805058edcd31\") " pod="calico-system/calico-typha-74886999c5-kss6h" Dec 16 12:55:47.739749 systemd[1]: Created slice kubepods-besteffort-pode0e19435_fce2_493e_a320_c227ac114170.slice - libcontainer container kubepods-besteffort-pode0e19435_fce2_493e_a320_c227ac114170.slice. Dec 16 12:55:47.758904 kubelet[2789]: E1216 12:55:47.758611 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:47.760191 containerd[1617]: time="2025-12-16T12:55:47.760112782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74886999c5-kss6h,Uid:cae2e9e2-c1c3-4b02-98a6-805058edcd31,Namespace:calico-system,Attempt:0,}" Dec 16 12:55:47.799520 kubelet[2789]: I1216 12:55:47.798294 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e0e19435-fce2-493e-a320-c227ac114170-cni-bin-dir\") pod \"calico-node-s2m79\" (UID: \"e0e19435-fce2-493e-a320-c227ac114170\") " pod="calico-system/calico-node-s2m79" Dec 16 12:55:47.799520 kubelet[2789]: I1216 12:55:47.798360 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e0e19435-fce2-493e-a320-c227ac114170-cni-log-dir\") pod \"calico-node-s2m79\" (UID: \"e0e19435-fce2-493e-a320-c227ac114170\") " pod="calico-system/calico-node-s2m79" Dec 16 12:55:47.799520 kubelet[2789]: I1216 12:55:47.798388 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e0e19435-fce2-493e-a320-c227ac114170-cni-net-dir\") pod \"calico-node-s2m79\" (UID: \"e0e19435-fce2-493e-a320-c227ac114170\") " pod="calico-system/calico-node-s2m79" Dec 16 12:55:47.799520 kubelet[2789]: I1216 12:55:47.798414 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e0e19435-fce2-493e-a320-c227ac114170-var-run-calico\") pod \"calico-node-s2m79\" (UID: \"e0e19435-fce2-493e-a320-c227ac114170\") " pod="calico-system/calico-node-s2m79" Dec 16 12:55:47.799520 kubelet[2789]: I1216 12:55:47.798439 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e0e19435-fce2-493e-a320-c227ac114170-node-certs\") pod \"calico-node-s2m79\" (UID: \"e0e19435-fce2-493e-a320-c227ac114170\") " pod="calico-system/calico-node-s2m79" Dec 16 12:55:47.799793 kubelet[2789]: I1216 12:55:47.798461 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t7fw\" (UniqueName: \"kubernetes.io/projected/e0e19435-fce2-493e-a320-c227ac114170-kube-api-access-9t7fw\") pod \"calico-node-s2m79\" (UID: \"e0e19435-fce2-493e-a320-c227ac114170\") " pod="calico-system/calico-node-s2m79" Dec 16 12:55:47.799793 kubelet[2789]: I1216 12:55:47.798486 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0e19435-fce2-493e-a320-c227ac114170-lib-modules\") pod \"calico-node-s2m79\" (UID: \"e0e19435-fce2-493e-a320-c227ac114170\") " pod="calico-system/calico-node-s2m79" Dec 16 12:55:47.799793 kubelet[2789]: I1216 12:55:47.798514 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/e0e19435-fce2-493e-a320-c227ac114170-flexvol-driver-host\") pod \"calico-node-s2m79\" (UID: \"e0e19435-fce2-493e-a320-c227ac114170\") " pod="calico-system/calico-node-s2m79" Dec 16 12:55:47.799793 kubelet[2789]: I1216 12:55:47.798539 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e0e19435-fce2-493e-a320-c227ac114170-policysync\") pod \"calico-node-s2m79\" (UID: \"e0e19435-fce2-493e-a320-c227ac114170\") " pod="calico-system/calico-node-s2m79" Dec 16 12:55:47.799793 kubelet[2789]: I1216 12:55:47.798561 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0e19435-fce2-493e-a320-c227ac114170-tigera-ca-bundle\") pod \"calico-node-s2m79\" (UID: \"e0e19435-fce2-493e-a320-c227ac114170\") " pod="calico-system/calico-node-s2m79" Dec 16 12:55:47.799916 kubelet[2789]: I1216 12:55:47.798591 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e0e19435-fce2-493e-a320-c227ac114170-xtables-lock\") pod \"calico-node-s2m79\" (UID: \"e0e19435-fce2-493e-a320-c227ac114170\") " pod="calico-system/calico-node-s2m79" Dec 16 12:55:47.799916 kubelet[2789]: I1216 12:55:47.798611 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e0e19435-fce2-493e-a320-c227ac114170-var-lib-calico\") pod \"calico-node-s2m79\" (UID: \"e0e19435-fce2-493e-a320-c227ac114170\") " pod="calico-system/calico-node-s2m79" Dec 16 12:55:47.812600 containerd[1617]: time="2025-12-16T12:55:47.812488842Z" level=info msg="connecting to shim 9d69be12776bebeacbc72f8e3023aaee8f40290a343d49e931de98dea3da6264" 
address="unix:///run/containerd/s/4a6f261e5692491975d723fdf90c9fc42e96c0f5ec6f9800e47a2131f6518f0a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:55:47.845994 systemd[1]: Started cri-containerd-9d69be12776bebeacbc72f8e3023aaee8f40290a343d49e931de98dea3da6264.scope - libcontainer container 9d69be12776bebeacbc72f8e3023aaee8f40290a343d49e931de98dea3da6264. Dec 16 12:55:47.878234 kernel: kauditd_printk_skb: 26 callbacks suppressed Dec 16 12:55:47.878377 kernel: audit: type=1334 audit(1765889747.874:540): prog-id=154 op=LOAD Dec 16 12:55:47.874000 audit: BPF prog-id=154 op=LOAD Dec 16 12:55:47.878000 audit: BPF prog-id=155 op=LOAD Dec 16 12:55:47.882237 kernel: audit: type=1334 audit(1765889747.878:541): prog-id=155 op=LOAD Dec 16 12:55:47.878000 audit[3277]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3265 pid=3277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:47.888807 kernel: audit: type=1300 audit(1765889747.878:541): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3265 pid=3277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:47.878000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964363962653132373736626562656163626337326638653330323361 Dec 16 12:55:47.893794 kernel: audit: type=1327 audit(1765889747.878:541): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964363962653132373736626562656163626337326638653330323361 Dec 16 12:55:47.895710 kernel: audit: type=1334 audit(1765889747.878:542): prog-id=155 op=UNLOAD Dec 16 12:55:47.878000 audit: BPF prog-id=155 op=UNLOAD Dec 16 12:55:47.878000 audit[3277]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3265 pid=3277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:47.900664 kernel: audit: type=1300 audit(1765889747.878:542): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3265 pid=3277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:47.878000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964363962653132373736626562656163626337326638653330323361 Dec 16 12:55:47.906689 kernel: audit: type=1327 audit(1765889747.878:542): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964363962653132373736626562656163626337326638653330323361 Dec 16 12:55:47.878000 audit: BPF prog-id=156 op=LOAD Dec 16 12:55:47.921734 kubelet[2789]: E1216 12:55:47.919914 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:47.921734 kubelet[2789]: W1216 
12:55:47.919946 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:47.921734 kubelet[2789]: E1216 12:55:47.919982 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:47.926653 kernel: audit: type=1334 audit(1765889747.878:543): prog-id=156 op=LOAD Dec 16 12:55:47.928579 kubelet[2789]: E1216 12:55:47.927736 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:47.928579 kubelet[2789]: W1216 12:55:47.928416 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:47.878000 audit[3277]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3265 pid=3277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:47.930449 kubelet[2789]: E1216 12:55:47.929188 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:47.933670 kernel: audit: type=1300 audit(1765889747.878:543): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3265 pid=3277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:47.878000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964363962653132373736626562656163626337326638653330323361 Dec 16 12:55:47.941357 kernel: audit: type=1327 audit(1765889747.878:543): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964363962653132373736626562656163626337326638653330323361 Dec 16 12:55:47.878000 audit: BPF prog-id=157 op=LOAD Dec 16 12:55:47.878000 audit[3277]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3265 pid=3277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:47.878000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964363962653132373736626562656163626337326638653330323361 Dec 16 12:55:47.878000 audit: BPF prog-id=157 op=UNLOAD Dec 16 12:55:47.878000 audit[3277]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3265 pid=3277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:47.878000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964363962653132373736626562656163626337326638653330323361 Dec 16 12:55:47.878000 audit: BPF prog-id=156 op=UNLOAD Dec 16 12:55:47.878000 audit[3277]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3265 pid=3277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:47.878000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964363962653132373736626562656163626337326638653330323361 Dec 16 12:55:47.878000 audit: BPF prog-id=158 op=LOAD Dec 16 12:55:47.878000 audit[3277]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3265 pid=3277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:47.878000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964363962653132373736626562656163626337326638653330323361 Dec 16 12:55:47.945560 kubelet[2789]: E1216 12:55:47.945265 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:47.945560 kubelet[2789]: W1216 12:55:47.945447 
2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:47.945560 kubelet[2789]: E1216 12:55:47.945491 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:47.973262 kubelet[2789]: E1216 12:55:47.972919 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rmjtf" podUID="7b89c039-0754-43bd-ad85-5506dee48dad" Dec 16 12:55:47.981130 kubelet[2789]: E1216 12:55:47.980950 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:47.981130 kubelet[2789]: W1216 12:55:47.980981 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:47.981130 kubelet[2789]: E1216 12:55:47.981008 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:47.982011 kubelet[2789]: E1216 12:55:47.981976 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:47.982011 kubelet[2789]: W1216 12:55:47.981999 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:47.982186 kubelet[2789]: E1216 12:55:47.982021 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:47.982291 kubelet[2789]: E1216 12:55:47.982244 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:47.982291 kubelet[2789]: W1216 12:55:47.982255 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:47.982291 kubelet[2789]: E1216 12:55:47.982267 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:47.982835 kubelet[2789]: E1216 12:55:47.982818 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:47.982835 kubelet[2789]: W1216 12:55:47.982832 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:47.982950 kubelet[2789]: E1216 12:55:47.982849 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:47.983226 kubelet[2789]: E1216 12:55:47.983211 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:47.983226 kubelet[2789]: W1216 12:55:47.983222 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:47.983324 kubelet[2789]: E1216 12:55:47.983233 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:47.983452 kubelet[2789]: E1216 12:55:47.983437 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:47.983452 kubelet[2789]: W1216 12:55:47.983451 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:47.983533 kubelet[2789]: E1216 12:55:47.983462 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:47.984029 kubelet[2789]: E1216 12:55:47.984005 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:47.984097 kubelet[2789]: W1216 12:55:47.984024 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:47.984097 kubelet[2789]: E1216 12:55:47.984060 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:47.984799 kubelet[2789]: E1216 12:55:47.984781 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:47.984799 kubelet[2789]: W1216 12:55:47.984795 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:47.984904 kubelet[2789]: E1216 12:55:47.984808 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:47.985440 kubelet[2789]: E1216 12:55:47.985419 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:47.985440 kubelet[2789]: W1216 12:55:47.985433 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:47.985440 kubelet[2789]: E1216 12:55:47.985445 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:47.986076 kubelet[2789]: E1216 12:55:47.986057 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:47.986076 kubelet[2789]: W1216 12:55:47.986073 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:47.986191 kubelet[2789]: E1216 12:55:47.986085 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:47.986791 kubelet[2789]: E1216 12:55:47.986773 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:47.986791 kubelet[2789]: W1216 12:55:47.986788 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:47.986933 kubelet[2789]: E1216 12:55:47.986800 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:47.986978 kubelet[2789]: E1216 12:55:47.986955 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:47.986978 kubelet[2789]: W1216 12:55:47.986962 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:47.986978 kubelet[2789]: E1216 12:55:47.986969 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:47.987821 kubelet[2789]: E1216 12:55:47.987801 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:47.987821 kubelet[2789]: W1216 12:55:47.987815 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:47.987945 kubelet[2789]: E1216 12:55:47.987828 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:47.987990 kubelet[2789]: E1216 12:55:47.987972 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:47.987990 kubelet[2789]: W1216 12:55:47.987978 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:47.987990 kubelet[2789]: E1216 12:55:47.987985 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:47.988132 kubelet[2789]: E1216 12:55:47.988116 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:47.988132 kubelet[2789]: W1216 12:55:47.988126 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:47.988132 kubelet[2789]: E1216 12:55:47.988133 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:47.989109 kubelet[2789]: E1216 12:55:47.988765 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:47.989109 kubelet[2789]: W1216 12:55:47.988780 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:47.989109 kubelet[2789]: E1216 12:55:47.988792 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:47.989259 kubelet[2789]: E1216 12:55:47.989189 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:47.989259 kubelet[2789]: W1216 12:55:47.989202 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:47.989259 kubelet[2789]: E1216 12:55:47.989215 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:47.990516 kubelet[2789]: E1216 12:55:47.989956 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:47.990516 kubelet[2789]: W1216 12:55:47.989973 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:47.990516 kubelet[2789]: E1216 12:55:47.989985 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:47.990516 kubelet[2789]: E1216 12:55:47.990375 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:47.990516 kubelet[2789]: W1216 12:55:47.990385 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:47.990516 kubelet[2789]: E1216 12:55:47.990398 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:47.991805 kubelet[2789]: E1216 12:55:47.991785 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:47.991805 kubelet[2789]: W1216 12:55:47.991800 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:47.991939 kubelet[2789]: E1216 12:55:47.991813 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:48.006182 kubelet[2789]: E1216 12:55:48.005128 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:48.006182 kubelet[2789]: W1216 12:55:48.005163 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:48.006182 kubelet[2789]: E1216 12:55:48.005192 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:48.006182 kubelet[2789]: I1216 12:55:48.005234 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwc8p\" (UniqueName: \"kubernetes.io/projected/7b89c039-0754-43bd-ad85-5506dee48dad-kube-api-access-lwc8p\") pod \"csi-node-driver-rmjtf\" (UID: \"7b89c039-0754-43bd-ad85-5506dee48dad\") " pod="calico-system/csi-node-driver-rmjtf" Dec 16 12:55:48.006182 kubelet[2789]: E1216 12:55:48.006149 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:48.006182 kubelet[2789]: W1216 12:55:48.006179 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:48.006448 kubelet[2789]: E1216 12:55:48.006200 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:48.006448 kubelet[2789]: I1216 12:55:48.006232 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7b89c039-0754-43bd-ad85-5506dee48dad-varrun\") pod \"csi-node-driver-rmjtf\" (UID: \"7b89c039-0754-43bd-ad85-5506dee48dad\") " pod="calico-system/csi-node-driver-rmjtf" Dec 16 12:55:48.008910 kubelet[2789]: E1216 12:55:48.008736 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:48.009626 kubelet[2789]: W1216 12:55:48.009362 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:48.010302 kubelet[2789]: E1216 12:55:48.010203 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:48.010821 kubelet[2789]: I1216 12:55:48.010600 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7b89c039-0754-43bd-ad85-5506dee48dad-registration-dir\") pod \"csi-node-driver-rmjtf\" (UID: \"7b89c039-0754-43bd-ad85-5506dee48dad\") " pod="calico-system/csi-node-driver-rmjtf" Dec 16 12:55:48.011702 kubelet[2789]: E1216 12:55:48.011106 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:48.011702 kubelet[2789]: W1216 12:55:48.011516 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:48.011702 kubelet[2789]: E1216 12:55:48.011546 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:48.013050 kubelet[2789]: E1216 12:55:48.012560 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:48.013050 kubelet[2789]: W1216 12:55:48.012588 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:48.013050 kubelet[2789]: E1216 12:55:48.012614 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:48.014126 kubelet[2789]: E1216 12:55:48.013657 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:48.014126 kubelet[2789]: W1216 12:55:48.013698 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:48.014126 kubelet[2789]: E1216 12:55:48.013732 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:48.014859 kubelet[2789]: E1216 12:55:48.014801 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:48.014859 kubelet[2789]: W1216 12:55:48.014825 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:48.014859 kubelet[2789]: E1216 12:55:48.014850 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:48.014859 kubelet[2789]: I1216 12:55:48.014925 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7b89c039-0754-43bd-ad85-5506dee48dad-kubelet-dir\") pod \"csi-node-driver-rmjtf\" (UID: \"7b89c039-0754-43bd-ad85-5506dee48dad\") " pod="calico-system/csi-node-driver-rmjtf" Dec 16 12:55:48.016149 kubelet[2789]: E1216 12:55:48.016041 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:48.016149 kubelet[2789]: W1216 12:55:48.016065 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:48.016149 kubelet[2789]: E1216 12:55:48.016086 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:48.017749 kubelet[2789]: E1216 12:55:48.016984 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:48.018045 kubelet[2789]: W1216 12:55:48.017734 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:48.018045 kubelet[2789]: E1216 12:55:48.017791 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:48.018239 kubelet[2789]: E1216 12:55:48.018155 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:48.018454 kubelet[2789]: W1216 12:55:48.018176 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:48.018696 kubelet[2789]: E1216 12:55:48.018474 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:48.018817 kubelet[2789]: I1216 12:55:48.018697 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7b89c039-0754-43bd-ad85-5506dee48dad-socket-dir\") pod \"csi-node-driver-rmjtf\" (UID: \"7b89c039-0754-43bd-ad85-5506dee48dad\") " pod="calico-system/csi-node-driver-rmjtf" Dec 16 12:55:48.019301 kubelet[2789]: E1216 12:55:48.019283 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:48.019474 kubelet[2789]: W1216 12:55:48.019428 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:48.019474 kubelet[2789]: E1216 12:55:48.019454 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:48.019938 kubelet[2789]: E1216 12:55:48.019898 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:48.019938 kubelet[2789]: W1216 12:55:48.019912 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:48.020115 kubelet[2789]: E1216 12:55:48.020082 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:48.020841 kubelet[2789]: E1216 12:55:48.020779 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:48.020841 kubelet[2789]: W1216 12:55:48.020799 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:48.020841 kubelet[2789]: E1216 12:55:48.020812 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:48.021944 kubelet[2789]: E1216 12:55:48.021919 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:48.021944 kubelet[2789]: W1216 12:55:48.021941 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:48.022163 kubelet[2789]: E1216 12:55:48.021961 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:48.022841 kubelet[2789]: E1216 12:55:48.022820 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:48.022841 kubelet[2789]: W1216 12:55:48.022840 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:48.022960 kubelet[2789]: E1216 12:55:48.022859 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:48.055463 kubelet[2789]: E1216 12:55:48.055174 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:48.056844 containerd[1617]: time="2025-12-16T12:55:48.056809822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s2m79,Uid:e0e19435-fce2-493e-a320-c227ac114170,Namespace:calico-system,Attempt:0,}" Dec 16 12:55:48.059296 containerd[1617]: time="2025-12-16T12:55:48.059247139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74886999c5-kss6h,Uid:cae2e9e2-c1c3-4b02-98a6-805058edcd31,Namespace:calico-system,Attempt:0,} returns sandbox id \"9d69be12776bebeacbc72f8e3023aaee8f40290a343d49e931de98dea3da6264\"" Dec 16 12:55:48.060718 kubelet[2789]: E1216 12:55:48.060522 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:48.062617 containerd[1617]: time="2025-12-16T12:55:48.062396594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 12:55:48.127768 kubelet[2789]: E1216 12:55:48.127322 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:48.127768 kubelet[2789]: W1216 12:55:48.127367 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:48.127768 kubelet[2789]: E1216 12:55:48.127396 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:48.128663 kubelet[2789]: E1216 12:55:48.128413 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:48.128663 kubelet[2789]: W1216 12:55:48.128436 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:48.128663 kubelet[2789]: E1216 12:55:48.128460 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:48.128833 kubelet[2789]: E1216 12:55:48.128774 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:48.128833 kubelet[2789]: W1216 12:55:48.128792 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:48.128833 kubelet[2789]: E1216 12:55:48.128809 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:48.131269 kubelet[2789]: E1216 12:55:48.130125 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:48.131269 kubelet[2789]: W1216 12:55:48.130152 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:48.131269 kubelet[2789]: E1216 12:55:48.130172 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:48.131269 kubelet[2789]: E1216 12:55:48.130348 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:48.131269 kubelet[2789]: W1216 12:55:48.130358 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:48.131269 kubelet[2789]: E1216 12:55:48.130366 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:48.131269 kubelet[2789]: E1216 12:55:48.130605 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:48.131269 kubelet[2789]: W1216 12:55:48.130617 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:48.131269 kubelet[2789]: E1216 12:55:48.130642 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:48.131269 kubelet[2789]: E1216 12:55:48.130822 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:48.132518 kubelet[2789]: W1216 12:55:48.130829 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:48.132518 kubelet[2789]: E1216 12:55:48.130838 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:48.132518 kubelet[2789]: E1216 12:55:48.130990 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:48.132518 kubelet[2789]: W1216 12:55:48.130997 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:48.132518 kubelet[2789]: E1216 12:55:48.131004 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:48.132518 kubelet[2789]: E1216 12:55:48.131177 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:48.132518 kubelet[2789]: W1216 12:55:48.131189 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:48.132518 kubelet[2789]: E1216 12:55:48.131217 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:48.132518 kubelet[2789]: E1216 12:55:48.132300 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:48.132518 kubelet[2789]: W1216 12:55:48.132317 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:48.133938 kubelet[2789]: E1216 12:55:48.132332 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:48.133938 kubelet[2789]: E1216 12:55:48.132498 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:48.133938 kubelet[2789]: W1216 12:55:48.132507 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:48.133938 kubelet[2789]: E1216 12:55:48.132515 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:48.133938 kubelet[2789]: E1216 12:55:48.132741 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:48.133938 kubelet[2789]: W1216 12:55:48.132753 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:48.133938 kubelet[2789]: E1216 12:55:48.132765 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:48.133938 kubelet[2789]: E1216 12:55:48.133004 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:48.133938 kubelet[2789]: W1216 12:55:48.133019 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:48.133938 kubelet[2789]: E1216 12:55:48.133032 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:48.135123 kubelet[2789]: E1216 12:55:48.133764 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:48.135123 kubelet[2789]: W1216 12:55:48.133775 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:48.135123 kubelet[2789]: E1216 12:55:48.133809 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:48.135123 kubelet[2789]: E1216 12:55:48.134037 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:48.135123 kubelet[2789]: W1216 12:55:48.134052 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:48.135123 kubelet[2789]: E1216 12:55:48.134065 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:48.135123 kubelet[2789]: E1216 12:55:48.134755 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:48.135123 kubelet[2789]: W1216 12:55:48.134767 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:48.135123 kubelet[2789]: E1216 12:55:48.134779 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:48.136235 kubelet[2789]: E1216 12:55:48.135464 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:48.136235 kubelet[2789]: W1216 12:55:48.135477 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:48.136235 kubelet[2789]: E1216 12:55:48.135489 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:48.136235 kubelet[2789]: E1216 12:55:48.135659 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:48.136235 kubelet[2789]: W1216 12:55:48.135667 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:48.136235 kubelet[2789]: E1216 12:55:48.135678 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:48.136235 kubelet[2789]: E1216 12:55:48.135826 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:48.136235 kubelet[2789]: W1216 12:55:48.135833 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:48.136235 kubelet[2789]: E1216 12:55:48.135841 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:48.136235 kubelet[2789]: E1216 12:55:48.136075 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:48.137028 kubelet[2789]: W1216 12:55:48.136084 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:48.137028 kubelet[2789]: E1216 12:55:48.136093 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:48.137028 kubelet[2789]: E1216 12:55:48.136772 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:48.137028 kubelet[2789]: W1216 12:55:48.136783 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:48.137028 kubelet[2789]: E1216 12:55:48.136794 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:48.138904 kubelet[2789]: E1216 12:55:48.138098 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:48.138904 kubelet[2789]: W1216 12:55:48.138113 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:48.138904 kubelet[2789]: E1216 12:55:48.138125 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:48.138904 kubelet[2789]: E1216 12:55:48.138322 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:48.138904 kubelet[2789]: W1216 12:55:48.138330 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:48.138904 kubelet[2789]: E1216 12:55:48.138339 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:48.138904 kubelet[2789]: E1216 12:55:48.138488 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:48.138904 kubelet[2789]: W1216 12:55:48.138494 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:48.138904 kubelet[2789]: E1216 12:55:48.138504 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:48.138904 kubelet[2789]: E1216 12:55:48.138699 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:48.139338 kubelet[2789]: W1216 12:55:48.138706 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:48.139338 kubelet[2789]: E1216 12:55:48.138713 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:48.156484 containerd[1617]: time="2025-12-16T12:55:48.155789580Z" level=info msg="connecting to shim 4145a736864d243e1730f7ee86ef11cac83fca4dd4712618963b5abe4c7004fc" address="unix:///run/containerd/s/67e1fbe24b18ee8282af5281b291451a0b5bd89bb33d127ba94afc43d4ed0975" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:55:48.186859 kubelet[2789]: E1216 12:55:48.186762 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:48.186859 kubelet[2789]: W1216 12:55:48.186783 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:48.186859 kubelet[2789]: E1216 12:55:48.186808 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:48.205024 systemd[1]: Started cri-containerd-4145a736864d243e1730f7ee86ef11cac83fca4dd4712618963b5abe4c7004fc.scope - libcontainer container 4145a736864d243e1730f7ee86ef11cac83fca4dd4712618963b5abe4c7004fc. 
Dec 16 12:55:48.241000 audit: BPF prog-id=159 op=LOAD Dec 16 12:55:48.243000 audit: BPF prog-id=160 op=LOAD Dec 16 12:55:48.243000 audit[3399]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017e238 a2=98 a3=0 items=0 ppid=3387 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:48.243000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431343561373336383634643234336531373330663765653836656631 Dec 16 12:55:48.243000 audit: BPF prog-id=160 op=UNLOAD Dec 16 12:55:48.243000 audit[3399]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3387 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:48.243000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431343561373336383634643234336531373330663765653836656631 Dec 16 12:55:48.244000 audit: BPF prog-id=161 op=LOAD Dec 16 12:55:48.244000 audit[3399]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017e488 a2=98 a3=0 items=0 ppid=3387 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:48.244000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431343561373336383634643234336531373330663765653836656631 Dec 16 12:55:48.244000 audit: BPF prog-id=162 op=LOAD Dec 16 12:55:48.244000 audit[3399]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017e218 a2=98 a3=0 items=0 ppid=3387 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:48.244000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431343561373336383634643234336531373330663765653836656631 Dec 16 12:55:48.244000 audit: BPF prog-id=162 op=UNLOAD Dec 16 12:55:48.244000 audit[3399]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3387 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:48.244000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431343561373336383634643234336531373330663765653836656631 Dec 16 12:55:48.245000 audit: BPF prog-id=161 op=UNLOAD Dec 16 12:55:48.245000 audit[3399]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3387 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:55:48.245000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431343561373336383634643234336531373330663765653836656631 Dec 16 12:55:48.245000 audit: BPF prog-id=163 op=LOAD Dec 16 12:55:48.245000 audit[3399]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017e6e8 a2=98 a3=0 items=0 ppid=3387 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:48.245000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431343561373336383634643234336531373330663765653836656631 Dec 16 12:55:48.280704 containerd[1617]: time="2025-12-16T12:55:48.280023854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s2m79,Uid:e0e19435-fce2-493e-a320-c227ac114170,Namespace:calico-system,Attempt:0,} returns sandbox id \"4145a736864d243e1730f7ee86ef11cac83fca4dd4712618963b5abe4c7004fc\"" Dec 16 12:55:48.283897 kubelet[2789]: E1216 12:55:48.283429 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:48.475000 audit[3427]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3427 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:55:48.475000 audit[3427]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe12010cf0 a2=0 a3=7ffe12010cdc items=0 ppid=2936 pid=3427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:48.475000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:55:48.479000 audit[3427]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3427 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:55:48.479000 audit[3427]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe12010cf0 a2=0 a3=0 items=0 ppid=2936 pid=3427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:48.479000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:55:49.397686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2018519575.mount: Deactivated successfully. 
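The audit PROCTITLE records above carry each process's command line hex-encoded, with NUL bytes separating the arguments. A minimal sketch for making them readable (the helper name `decode_proctitle` is ours, not part of any tool):

```python
def decode_proctitle(hex_proctitle: str) -> str:
    """Decode an audit PROCTITLE field: hex-encoded argv, NUL-separated."""
    raw = bytes.fromhex(hex_proctitle)
    # arguments are separated by NUL bytes; join them with spaces for display
    return " ".join(part.decode("utf-8", "replace") for part in raw.split(b"\x00"))

# the iptables-restore proctitle from the audit records above
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"
))
# iptables-restore -w 5 -W 100000 --noflush --counters
```

The runc proctitles decode the same way, yielding `runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/…` with the (truncated) container ID as the path suffix.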
Dec 16 12:55:49.417177 kubelet[2789]: E1216 12:55:49.417099 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rmjtf" podUID="7b89c039-0754-43bd-ad85-5506dee48dad"
Dec 16 12:55:50.356681 containerd[1617]: time="2025-12-16T12:55:50.356567285Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:55:50.358427 containerd[1617]: time="2025-12-16T12:55:50.358368297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893"
Dec 16 12:55:50.358893 containerd[1617]: time="2025-12-16T12:55:50.358825883Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:55:50.364962 containerd[1617]: time="2025-12-16T12:55:50.364855723Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:55:50.365744 containerd[1617]: time="2025-12-16T12:55:50.365541611Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.30310682s"
Dec 16 12:55:50.365744 containerd[1617]: time="2025-12-16T12:55:50.365581372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Dec 16 12:55:50.369120 containerd[1617]: time="2025-12-16T12:55:50.367898504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Dec 16 12:55:50.429853 containerd[1617]: time="2025-12-16T12:55:50.428625746Z" level=info msg="CreateContainer within sandbox \"9d69be12776bebeacbc72f8e3023aaee8f40290a343d49e931de98dea3da6264\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Dec 16 12:55:50.437918 containerd[1617]: time="2025-12-16T12:55:50.437857256Z" level=info msg="Container 2b97c64cc3b0b509b7669fa7a62bafe52da4ef73b3dbccc2b7b8184db0ed4e85: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:55:50.446444 containerd[1617]: time="2025-12-16T12:55:50.446377249Z" level=info msg="CreateContainer within sandbox \"9d69be12776bebeacbc72f8e3023aaee8f40290a343d49e931de98dea3da6264\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2b97c64cc3b0b509b7669fa7a62bafe52da4ef73b3dbccc2b7b8184db0ed4e85\""
Dec 16 12:55:50.448099 containerd[1617]: time="2025-12-16T12:55:50.448014290Z" level=info msg="StartContainer for \"2b97c64cc3b0b509b7669fa7a62bafe52da4ef73b3dbccc2b7b8184db0ed4e85\""
Dec 16 12:55:50.450303 containerd[1617]: time="2025-12-16T12:55:50.450234766Z" level=info msg="connecting to shim 2b97c64cc3b0b509b7669fa7a62bafe52da4ef73b3dbccc2b7b8184db0ed4e85" address="unix:///run/containerd/s/4a6f261e5692491975d723fdf90c9fc42e96c0f5ec6f9800e47a2131f6518f0a" protocol=ttrpc version=3
Dec 16 12:55:50.480006 systemd[1]: Started cri-containerd-2b97c64cc3b0b509b7669fa7a62bafe52da4ef73b3dbccc2b7b8184db0ed4e85.scope - libcontainer container 2b97c64cc3b0b509b7669fa7a62bafe52da4ef73b3dbccc2b7b8184db0ed4e85.
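Each runc invocation in the audit stream loads several BPF programs and unloads most of them again; prog-ids that never receive a matching `op=UNLOAD` remain live. A rough sketch for tallying these pairs in a journal excerpt (the helper and its regex are ours, not a standard tool):

```python
import re

def live_bpf_progs(journal_text: str) -> set:
    """Return audit BPF prog-ids with more LOAD events than UNLOAD events."""
    counts = {}
    for prog_id, op in re.findall(r"BPF prog-id=(\d+) op=(LOAD|UNLOAD)", journal_text):
        # +1 per LOAD, -1 per UNLOAD; a positive balance means still loaded
        counts[prog_id] = counts.get(prog_id, 0) + (1 if op == "LOAD" else -1)
    return {prog_id for prog_id, n in counts.items() if n > 0}

excerpt = ("audit: BPF prog-id=164 op=LOAD audit: BPF prog-id=165 op=LOAD "
           "audit: BPF prog-id=165 op=UNLOAD")
print(live_bpf_progs(excerpt))
# {'164'}
```

Applied to the records above, prog-ids such as 165, 166, and 167 balance out, matching the short-lived verifier test loads runc performs during container start.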
Dec 16 12:55:50.504000 audit: BPF prog-id=164 op=LOAD Dec 16 12:55:50.506000 audit: BPF prog-id=165 op=LOAD Dec 16 12:55:50.506000 audit[3439]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=3265 pid=3439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:50.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262393763363463633362306235303962373636396661376136326261 Dec 16 12:55:50.506000 audit: BPF prog-id=165 op=UNLOAD Dec 16 12:55:50.506000 audit[3439]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3265 pid=3439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:50.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262393763363463633362306235303962373636396661376136326261 Dec 16 12:55:50.506000 audit: BPF prog-id=166 op=LOAD Dec 16 12:55:50.506000 audit[3439]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3265 pid=3439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:50.506000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262393763363463633362306235303962373636396661376136326261 Dec 16 12:55:50.507000 audit: BPF prog-id=167 op=LOAD Dec 16 12:55:50.507000 audit[3439]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3265 pid=3439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:50.507000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262393763363463633362306235303962373636396661376136326261 Dec 16 12:55:50.507000 audit: BPF prog-id=167 op=UNLOAD Dec 16 12:55:50.507000 audit[3439]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3265 pid=3439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:50.507000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262393763363463633362306235303962373636396661376136326261 Dec 16 12:55:50.507000 audit: BPF prog-id=166 op=UNLOAD Dec 16 12:55:50.507000 audit[3439]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3265 pid=3439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:55:50.507000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262393763363463633362306235303962373636396661376136326261 Dec 16 12:55:50.507000 audit: BPF prog-id=168 op=LOAD Dec 16 12:55:50.507000 audit[3439]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3265 pid=3439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:50.507000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262393763363463633362306235303962373636396661376136326261 Dec 16 12:55:50.571973 containerd[1617]: time="2025-12-16T12:55:50.571927226Z" level=info msg="StartContainer for \"2b97c64cc3b0b509b7669fa7a62bafe52da4ef73b3dbccc2b7b8184db0ed4e85\" returns successfully" Dec 16 12:55:51.417663 kubelet[2789]: E1216 12:55:51.417560 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rmjtf" podUID="7b89c039-0754-43bd-ad85-5506dee48dad" Dec 16 12:55:51.575689 kubelet[2789]: E1216 12:55:51.575609 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:51.602444 kubelet[2789]: I1216 12:55:51.602225 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/calico-typha-74886999c5-kss6h" podStartSLOduration=2.29687418 podStartE2EDuration="4.602189232s" podCreationTimestamp="2025-12-16 12:55:47 +0000 UTC" firstStartedPulling="2025-12-16 12:55:48.061864622 +0000 UTC m=+21.887334029" lastFinishedPulling="2025-12-16 12:55:50.367179672 +0000 UTC m=+24.192649081" observedRunningTime="2025-12-16 12:55:51.600185771 +0000 UTC m=+25.425655206" watchObservedRunningTime="2025-12-16 12:55:51.602189232 +0000 UTC m=+25.427658752" Dec 16 12:55:51.618944 kubelet[2789]: E1216 12:55:51.618851 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:51.618944 kubelet[2789]: W1216 12:55:51.618940 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:51.619319 kubelet[2789]: E1216 12:55:51.618968 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:51.620241 kubelet[2789]: E1216 12:55:51.620211 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:51.620511 kubelet[2789]: W1216 12:55:51.620236 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:51.620511 kubelet[2789]: E1216 12:55:51.620349 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:51.621694 kubelet[2789]: E1216 12:55:51.621665 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:51.621694 kubelet[2789]: W1216 12:55:51.621689 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:51.621873 kubelet[2789]: E1216 12:55:51.621731 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:51.622037 kubelet[2789]: E1216 12:55:51.622012 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:51.622087 kubelet[2789]: W1216 12:55:51.622046 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:51.622087 kubelet[2789]: E1216 12:55:51.622064 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:51.622334 kubelet[2789]: E1216 12:55:51.622313 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:51.622392 kubelet[2789]: W1216 12:55:51.622340 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:51.622392 kubelet[2789]: E1216 12:55:51.622352 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:51.622590 kubelet[2789]: E1216 12:55:51.622571 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:51.622590 kubelet[2789]: W1216 12:55:51.622585 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:51.622729 kubelet[2789]: E1216 12:55:51.622595 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:51.625127 kubelet[2789]: E1216 12:55:51.625080 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:51.625127 kubelet[2789]: W1216 12:55:51.625111 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:51.625342 kubelet[2789]: E1216 12:55:51.625154 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:51.625570 kubelet[2789]: E1216 12:55:51.625547 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:51.625570 kubelet[2789]: W1216 12:55:51.625566 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:51.625712 kubelet[2789]: E1216 12:55:51.625585 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:51.625961 kubelet[2789]: E1216 12:55:51.625940 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:51.625961 kubelet[2789]: W1216 12:55:51.625958 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:51.626090 kubelet[2789]: E1216 12:55:51.625974 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:51.626293 kubelet[2789]: E1216 12:55:51.626273 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:51.626293 kubelet[2789]: W1216 12:55:51.626289 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:51.626415 kubelet[2789]: E1216 12:55:51.626304 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:51.626608 kubelet[2789]: E1216 12:55:51.626588 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:51.626608 kubelet[2789]: W1216 12:55:51.626604 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:51.626771 kubelet[2789]: E1216 12:55:51.626622 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:51.626979 kubelet[2789]: E1216 12:55:51.626959 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:51.626979 kubelet[2789]: W1216 12:55:51.626975 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:51.627095 kubelet[2789]: E1216 12:55:51.626989 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:51.627326 kubelet[2789]: E1216 12:55:51.627306 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:51.627326 kubelet[2789]: W1216 12:55:51.627322 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:51.627326 kubelet[2789]: E1216 12:55:51.627336 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:51.627581 kubelet[2789]: E1216 12:55:51.627562 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:51.627581 kubelet[2789]: W1216 12:55:51.627577 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:51.627913 kubelet[2789]: E1216 12:55:51.627591 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:51.629772 kubelet[2789]: E1216 12:55:51.629733 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:51.629772 kubelet[2789]: W1216 12:55:51.629762 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:51.629939 kubelet[2789]: E1216 12:55:51.629790 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:51.658664 kubelet[2789]: E1216 12:55:51.658606 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:51.658664 kubelet[2789]: W1216 12:55:51.658655 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:51.660016 kubelet[2789]: E1216 12:55:51.658685 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:51.660016 kubelet[2789]: E1216 12:55:51.659157 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:51.660016 kubelet[2789]: W1216 12:55:51.659167 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:51.660016 kubelet[2789]: E1216 12:55:51.659178 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:51.661271 kubelet[2789]: E1216 12:55:51.661207 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:51.661271 kubelet[2789]: W1216 12:55:51.661229 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:51.661271 kubelet[2789]: E1216 12:55:51.661251 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:51.662661 kubelet[2789]: E1216 12:55:51.662064 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:51.662661 kubelet[2789]: W1216 12:55:51.662081 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:51.662661 kubelet[2789]: E1216 12:55:51.662099 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:51.663372 kubelet[2789]: E1216 12:55:51.663203 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:51.663824 kubelet[2789]: W1216 12:55:51.663219 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:51.663824 kubelet[2789]: E1216 12:55:51.663801 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:51.665474 kubelet[2789]: E1216 12:55:51.664799 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:51.665474 kubelet[2789]: W1216 12:55:51.665307 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:51.665474 kubelet[2789]: E1216 12:55:51.665328 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:51.666811 kubelet[2789]: E1216 12:55:51.666794 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:51.666881 kubelet[2789]: W1216 12:55:51.666868 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:51.667402 kubelet[2789]: E1216 12:55:51.667298 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:51.668325 kubelet[2789]: E1216 12:55:51.668249 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:51.668914 kubelet[2789]: W1216 12:55:51.668686 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:51.668914 kubelet[2789]: E1216 12:55:51.668718 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:51.671431 kubelet[2789]: E1216 12:55:51.670970 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:51.671431 kubelet[2789]: W1216 12:55:51.670995 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:51.671431 kubelet[2789]: E1216 12:55:51.671019 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:51.672585 kubelet[2789]: E1216 12:55:51.672196 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:51.672585 kubelet[2789]: W1216 12:55:51.672217 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:51.672585 kubelet[2789]: E1216 12:55:51.672239 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:51.673819 kubelet[2789]: E1216 12:55:51.673719 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:51.673819 kubelet[2789]: W1216 12:55:51.673738 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:51.673819 kubelet[2789]: E1216 12:55:51.673760 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:51.675114 kubelet[2789]: E1216 12:55:51.675070 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:51.675415 kubelet[2789]: W1216 12:55:51.675282 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:51.675415 kubelet[2789]: E1216 12:55:51.675307 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:51.676334 kubelet[2789]: E1216 12:55:51.676233 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:51.676555 kubelet[2789]: W1216 12:55:51.676537 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:51.676910 kubelet[2789]: E1216 12:55:51.676676 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:51.676000 audit[3507]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=3507 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:55:51.676000 audit[3507]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe21c650d0 a2=0 a3=7ffe21c650bc items=0 ppid=2936 pid=3507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:51.676000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:55:51.679697 kubelet[2789]: E1216 12:55:51.678048 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:51.679697 kubelet[2789]: W1216 12:55:51.678065 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:51.679697 kubelet[2789]: E1216 12:55:51.678084 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:51.681418 kubelet[2789]: E1216 12:55:51.681272 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:51.682008 kubelet[2789]: W1216 12:55:51.681965 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:51.683955 kubelet[2789]: E1216 12:55:51.682154 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:51.683000 audit[3507]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=3507 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:55:51.683000 audit[3507]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe21c650d0 a2=0 a3=7ffe21c650bc items=0 ppid=2936 pid=3507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:51.683000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:55:51.686935 kubelet[2789]: E1216 12:55:51.684721 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:51.693754 kubelet[2789]: W1216 12:55:51.693688 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:51.711313 kubelet[2789]: E1216 12:55:51.710186 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume 
plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:51.721255 kubelet[2789]: E1216 12:55:51.719190 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:51.721255 kubelet[2789]: W1216 12:55:51.719296 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:51.721255 kubelet[2789]: E1216 12:55:51.719325 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:55:51.739549 kubelet[2789]: E1216 12:55:51.739492 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:55:51.739549 kubelet[2789]: W1216 12:55:51.739530 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:55:51.739826 kubelet[2789]: E1216 12:55:51.739579 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:55:51.848062 containerd[1617]: time="2025-12-16T12:55:51.847882442Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:55:51.849672 containerd[1617]: time="2025-12-16T12:55:51.849597741Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:55:51.849827 containerd[1617]: time="2025-12-16T12:55:51.849811230Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4442579" Dec 16 12:55:51.856208 containerd[1617]: time="2025-12-16T12:55:51.856158844Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:55:51.857187 containerd[1617]: time="2025-12-16T12:55:51.857154645Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.488106901s" Dec 16 12:55:51.857300 containerd[1617]: time="2025-12-16T12:55:51.857281243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 16 12:55:51.863384 containerd[1617]: time="2025-12-16T12:55:51.863332304Z" level=info msg="CreateContainer within sandbox \"4145a736864d243e1730f7ee86ef11cac83fca4dd4712618963b5abe4c7004fc\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 12:55:51.879945 containerd[1617]: time="2025-12-16T12:55:51.879901237Z" level=info msg="Container 44353580d48f0d2923eb2bd9ca23ac48614f3cfc787e27675b3a79b623fcdcd3: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:55:51.889880 containerd[1617]: time="2025-12-16T12:55:51.889831041Z" level=info msg="CreateContainer within sandbox \"4145a736864d243e1730f7ee86ef11cac83fca4dd4712618963b5abe4c7004fc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"44353580d48f0d2923eb2bd9ca23ac48614f3cfc787e27675b3a79b623fcdcd3\"" Dec 16 12:55:51.891226 containerd[1617]: time="2025-12-16T12:55:51.890658622Z" level=info msg="StartContainer for \"44353580d48f0d2923eb2bd9ca23ac48614f3cfc787e27675b3a79b623fcdcd3\"" Dec 16 12:55:51.892273 containerd[1617]: time="2025-12-16T12:55:51.892205078Z" level=info msg="connecting to shim 44353580d48f0d2923eb2bd9ca23ac48614f3cfc787e27675b3a79b623fcdcd3" address="unix:///run/containerd/s/67e1fbe24b18ee8282af5281b291451a0b5bd89bb33d127ba94afc43d4ed0975" protocol=ttrpc version=3 Dec 16 12:55:51.926944 systemd[1]: Started cri-containerd-44353580d48f0d2923eb2bd9ca23ac48614f3cfc787e27675b3a79b623fcdcd3.scope - libcontainer container 44353580d48f0d2923eb2bd9ca23ac48614f3cfc787e27675b3a79b623fcdcd3. 
Dec 16 12:55:51.982000 audit: BPF prog-id=169 op=LOAD Dec 16 12:55:51.982000 audit[3520]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3387 pid=3520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:51.982000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434333533353830643438663064323932336562326264396361323361 Dec 16 12:55:51.982000 audit: BPF prog-id=170 op=LOAD Dec 16 12:55:51.982000 audit[3520]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3387 pid=3520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:51.982000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434333533353830643438663064323932336562326264396361323361 Dec 16 12:55:51.983000 audit: BPF prog-id=170 op=UNLOAD Dec 16 12:55:51.983000 audit[3520]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3387 pid=3520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:51.983000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434333533353830643438663064323932336562326264396361323361 Dec 16 12:55:51.983000 audit: BPF prog-id=169 op=UNLOAD Dec 16 12:55:51.983000 audit[3520]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3387 pid=3520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:51.983000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434333533353830643438663064323932336562326264396361323361 Dec 16 12:55:51.983000 audit: BPF prog-id=171 op=LOAD Dec 16 12:55:51.983000 audit[3520]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3387 pid=3520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:51.983000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434333533353830643438663064323932336562326264396361323361 Dec 16 12:55:52.015806 containerd[1617]: time="2025-12-16T12:55:52.014122602Z" level=info msg="StartContainer for \"44353580d48f0d2923eb2bd9ca23ac48614f3cfc787e27675b3a79b623fcdcd3\" returns successfully" Dec 16 12:55:52.045806 systemd[1]: cri-containerd-44353580d48f0d2923eb2bd9ca23ac48614f3cfc787e27675b3a79b623fcdcd3.scope: Deactivated successfully. 
Dec 16 12:55:52.048000 audit: BPF prog-id=171 op=UNLOAD Dec 16 12:55:52.063469 containerd[1617]: time="2025-12-16T12:55:52.063394235Z" level=info msg="received container exit event container_id:\"44353580d48f0d2923eb2bd9ca23ac48614f3cfc787e27675b3a79b623fcdcd3\" id:\"44353580d48f0d2923eb2bd9ca23ac48614f3cfc787e27675b3a79b623fcdcd3\" pid:3532 exited_at:{seconds:1765889752 nanos:52195170}" Dec 16 12:55:52.108233 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44353580d48f0d2923eb2bd9ca23ac48614f3cfc787e27675b3a79b623fcdcd3-rootfs.mount: Deactivated successfully. Dec 16 12:55:52.579883 kubelet[2789]: E1216 12:55:52.579836 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:52.584825 kubelet[2789]: E1216 12:55:52.584735 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:52.586751 containerd[1617]: time="2025-12-16T12:55:52.586679276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 12:55:53.417541 kubelet[2789]: E1216 12:55:53.417453 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rmjtf" podUID="7b89c039-0754-43bd-ad85-5506dee48dad" Dec 16 12:55:53.581969 kubelet[2789]: E1216 12:55:53.581922 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:55.419918 kubelet[2789]: E1216 12:55:55.419093 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: 
container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rmjtf" podUID="7b89c039-0754-43bd-ad85-5506dee48dad" Dec 16 12:55:56.520953 containerd[1617]: time="2025-12-16T12:55:56.520854797Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:55:56.522207 containerd[1617]: time="2025-12-16T12:55:56.521931507Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Dec 16 12:55:56.523353 containerd[1617]: time="2025-12-16T12:55:56.523287352Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:55:56.526681 containerd[1617]: time="2025-12-16T12:55:56.526494486Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:55:56.527703 containerd[1617]: time="2025-12-16T12:55:56.527622782Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.940510888s" Dec 16 12:55:56.527703 containerd[1617]: time="2025-12-16T12:55:56.527690587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 16 12:55:56.535670 containerd[1617]: time="2025-12-16T12:55:56.535602908Z" level=info msg="CreateContainer within sandbox 
\"4145a736864d243e1730f7ee86ef11cac83fca4dd4712618963b5abe4c7004fc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 12:55:56.577845 containerd[1617]: time="2025-12-16T12:55:56.575904211Z" level=info msg="Container b5df04546bf7f7020e420f8d0c00668994e6408152641eedfeadb6663043f82c: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:55:56.595526 containerd[1617]: time="2025-12-16T12:55:56.595483995Z" level=info msg="CreateContainer within sandbox \"4145a736864d243e1730f7ee86ef11cac83fca4dd4712618963b5abe4c7004fc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b5df04546bf7f7020e420f8d0c00668994e6408152641eedfeadb6663043f82c\"" Dec 16 12:55:56.596611 containerd[1617]: time="2025-12-16T12:55:56.596581557Z" level=info msg="StartContainer for \"b5df04546bf7f7020e420f8d0c00668994e6408152641eedfeadb6663043f82c\"" Dec 16 12:55:56.599457 containerd[1617]: time="2025-12-16T12:55:56.599421738Z" level=info msg="connecting to shim b5df04546bf7f7020e420f8d0c00668994e6408152641eedfeadb6663043f82c" address="unix:///run/containerd/s/67e1fbe24b18ee8282af5281b291451a0b5bd89bb33d127ba94afc43d4ed0975" protocol=ttrpc version=3 Dec 16 12:55:56.629000 systemd[1]: Started cri-containerd-b5df04546bf7f7020e420f8d0c00668994e6408152641eedfeadb6663043f82c.scope - libcontainer container b5df04546bf7f7020e420f8d0c00668994e6408152641eedfeadb6663043f82c. 
Dec 16 12:55:56.687000 audit: BPF prog-id=172 op=LOAD Dec 16 12:55:56.689860 kernel: kauditd_printk_skb: 84 callbacks suppressed Dec 16 12:55:56.689913 kernel: audit: type=1334 audit(1765889756.687:574): prog-id=172 op=LOAD Dec 16 12:55:56.687000 audit[3575]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3387 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:56.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235646630343534366266376637303230653432306638643063303036 Dec 16 12:55:56.697738 kernel: audit: type=1300 audit(1765889756.687:574): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3387 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:56.697861 kernel: audit: type=1327 audit(1765889756.687:574): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235646630343534366266376637303230653432306638643063303036 Dec 16 12:55:56.687000 audit: BPF prog-id=173 op=LOAD Dec 16 12:55:56.701160 kernel: audit: type=1334 audit(1765889756.687:575): prog-id=173 op=LOAD Dec 16 12:55:56.687000 audit[3575]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3387 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:56.703407 kernel: audit: type=1300 audit(1765889756.687:575): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3387 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:56.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235646630343534366266376637303230653432306638643063303036 Dec 16 12:55:56.713681 kernel: audit: type=1327 audit(1765889756.687:575): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235646630343534366266376637303230653432306638643063303036 Dec 16 12:55:56.687000 audit: BPF prog-id=173 op=UNLOAD Dec 16 12:55:56.718668 kernel: audit: type=1334 audit(1765889756.687:576): prog-id=173 op=UNLOAD Dec 16 12:55:56.718806 kernel: audit: type=1300 audit(1765889756.687:576): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3387 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:56.687000 audit[3575]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3387 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:56.687000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235646630343534366266376637303230653432306638643063303036 Dec 16 12:55:56.721841 kernel: audit: type=1327 audit(1765889756.687:576): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235646630343534366266376637303230653432306638643063303036 Dec 16 12:55:56.687000 audit: BPF prog-id=172 op=UNLOAD Dec 16 12:55:56.725070 kernel: audit: type=1334 audit(1765889756.687:577): prog-id=172 op=UNLOAD Dec 16 12:55:56.687000 audit[3575]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3387 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:56.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235646630343534366266376637303230653432306638643063303036 Dec 16 12:55:56.687000 audit: BPF prog-id=174 op=LOAD Dec 16 12:55:56.687000 audit[3575]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3387 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:55:56.687000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235646630343534366266376637303230653432306638643063303036 Dec 16 12:55:56.765814 containerd[1617]: time="2025-12-16T12:55:56.765759333Z" level=info msg="StartContainer for \"b5df04546bf7f7020e420f8d0c00668994e6408152641eedfeadb6663043f82c\" returns successfully" Dec 16 12:55:57.397064 systemd[1]: cri-containerd-b5df04546bf7f7020e420f8d0c00668994e6408152641eedfeadb6663043f82c.scope: Deactivated successfully. Dec 16 12:55:57.397951 systemd[1]: cri-containerd-b5df04546bf7f7020e420f8d0c00668994e6408152641eedfeadb6663043f82c.scope: Consumed 595ms CPU time, 167.1M memory peak, 11.3M read from disk, 171.3M written to disk. Dec 16 12:55:57.400000 audit: BPF prog-id=174 op=UNLOAD Dec 16 12:55:57.419985 kubelet[2789]: E1216 12:55:57.419914 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rmjtf" podUID="7b89c039-0754-43bd-ad85-5506dee48dad" Dec 16 12:55:57.472664 kubelet[2789]: I1216 12:55:57.471452 2789 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 12:55:57.486203 containerd[1617]: time="2025-12-16T12:55:57.485640225Z" level=info msg="received container exit event container_id:\"b5df04546bf7f7020e420f8d0c00668994e6408152641eedfeadb6663043f82c\" id:\"b5df04546bf7f7020e420f8d0c00668994e6408152641eedfeadb6663043f82c\" pid:3588 exited_at:{seconds:1765889757 nanos:483685590}" Dec 16 12:55:57.590273 systemd[1]: Created slice kubepods-burstable-pod564acc2b_9d61_41a8_ac66_0231a2d37863.slice - libcontainer container kubepods-burstable-pod564acc2b_9d61_41a8_ac66_0231a2d37863.slice. 
Dec 16 12:55:57.611295 kubelet[2789]: I1216 12:55:57.611248 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrzn7\" (UniqueName: \"kubernetes.io/projected/eceb5ead-85dc-4ae8-98b5-b55994dab5ce-kube-api-access-qrzn7\") pod \"calico-kube-controllers-7b46df5bf6-8vt25\" (UID: \"eceb5ead-85dc-4ae8-98b5-b55994dab5ce\") " pod="calico-system/calico-kube-controllers-7b46df5bf6-8vt25" Dec 16 12:55:57.612271 kubelet[2789]: I1216 12:55:57.612233 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/575a9ccb-ab86-412f-ad4a-68e90d59046f-whisker-ca-bundle\") pod \"whisker-9b78f4b84-5zpvr\" (UID: \"575a9ccb-ab86-412f-ad4a-68e90d59046f\") " pod="calico-system/whisker-9b78f4b84-5zpvr" Dec 16 12:55:57.612509 kubelet[2789]: I1216 12:55:57.612487 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx689\" (UniqueName: \"kubernetes.io/projected/564acc2b-9d61-41a8-ac66-0231a2d37863-kube-api-access-xx689\") pod \"coredns-674b8bbfcf-jvzjz\" (UID: \"564acc2b-9d61-41a8-ac66-0231a2d37863\") " pod="kube-system/coredns-674b8bbfcf-jvzjz" Dec 16 12:55:57.612673 kubelet[2789]: I1216 12:55:57.612653 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frvjm\" (UniqueName: \"kubernetes.io/projected/575a9ccb-ab86-412f-ad4a-68e90d59046f-kube-api-access-frvjm\") pod \"whisker-9b78f4b84-5zpvr\" (UID: \"575a9ccb-ab86-412f-ad4a-68e90d59046f\") " pod="calico-system/whisker-9b78f4b84-5zpvr" Dec 16 12:55:57.613655 kubelet[2789]: I1216 12:55:57.612797 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eceb5ead-85dc-4ae8-98b5-b55994dab5ce-tigera-ca-bundle\") pod 
\"calico-kube-controllers-7b46df5bf6-8vt25\" (UID: \"eceb5ead-85dc-4ae8-98b5-b55994dab5ce\") " pod="calico-system/calico-kube-controllers-7b46df5bf6-8vt25" Dec 16 12:55:57.613655 kubelet[2789]: I1216 12:55:57.612832 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/575a9ccb-ab86-412f-ad4a-68e90d59046f-whisker-backend-key-pair\") pod \"whisker-9b78f4b84-5zpvr\" (UID: \"575a9ccb-ab86-412f-ad4a-68e90d59046f\") " pod="calico-system/whisker-9b78f4b84-5zpvr" Dec 16 12:55:57.613655 kubelet[2789]: I1216 12:55:57.612858 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/564acc2b-9d61-41a8-ac66-0231a2d37863-config-volume\") pod \"coredns-674b8bbfcf-jvzjz\" (UID: \"564acc2b-9d61-41a8-ac66-0231a2d37863\") " pod="kube-system/coredns-674b8bbfcf-jvzjz" Dec 16 12:55:57.626314 systemd[1]: Created slice kubepods-besteffort-pod575a9ccb_ab86_412f_ad4a_68e90d59046f.slice - libcontainer container kubepods-besteffort-pod575a9ccb_ab86_412f_ad4a_68e90d59046f.slice. Dec 16 12:55:57.650335 kubelet[2789]: E1216 12:55:57.647525 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:57.648873 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5df04546bf7f7020e420f8d0c00668994e6408152641eedfeadb6663043f82c-rootfs.mount: Deactivated successfully. Dec 16 12:55:57.657939 systemd[1]: Created slice kubepods-besteffort-pod4d5fd089_d56c_460c_b006_cc36a126ec32.slice - libcontainer container kubepods-besteffort-pod4d5fd089_d56c_460c_b006_cc36a126ec32.slice. 
Dec 16 12:55:57.671509 systemd[1]: Created slice kubepods-besteffort-pod5a9eaa6d_ffc5_496a_b44d_d7e196b6b18c.slice - libcontainer container kubepods-besteffort-pod5a9eaa6d_ffc5_496a_b44d_d7e196b6b18c.slice. Dec 16 12:55:57.685022 systemd[1]: Created slice kubepods-besteffort-podeceb5ead_85dc_4ae8_98b5_b55994dab5ce.slice - libcontainer container kubepods-besteffort-podeceb5ead_85dc_4ae8_98b5_b55994dab5ce.slice. Dec 16 12:55:57.697406 systemd[1]: Created slice kubepods-burstable-pod17667d96_fec2_4c58_952d_8aee4c298c11.slice - libcontainer container kubepods-burstable-pod17667d96_fec2_4c58_952d_8aee4c298c11.slice. Dec 16 12:55:57.710483 systemd[1]: Created slice kubepods-besteffort-pod9d74f6fb_d2c4_41ff_9241_88dfaec31538.slice - libcontainer container kubepods-besteffort-pod9d74f6fb_d2c4_41ff_9241_88dfaec31538.slice. Dec 16 12:55:57.713736 kubelet[2789]: I1216 12:55:57.713333 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/9d74f6fb-d2c4-41ff-9241-88dfaec31538-goldmane-key-pair\") pod \"goldmane-666569f655-hvm4l\" (UID: \"9d74f6fb-d2c4-41ff-9241-88dfaec31538\") " pod="calico-system/goldmane-666569f655-hvm4l" Dec 16 12:55:57.716987 kubelet[2789]: I1216 12:55:57.716929 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4d5fd089-d56c-460c-b006-cc36a126ec32-calico-apiserver-certs\") pod \"calico-apiserver-5f6f6f8cd5-nxqwt\" (UID: \"4d5fd089-d56c-460c-b006-cc36a126ec32\") " pod="calico-apiserver/calico-apiserver-5f6f6f8cd5-nxqwt" Dec 16 12:55:57.717524 kubelet[2789]: I1216 12:55:57.717391 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d74f6fb-d2c4-41ff-9241-88dfaec31538-config\") pod \"goldmane-666569f655-hvm4l\" (UID: 
\"9d74f6fb-d2c4-41ff-9241-88dfaec31538\") " pod="calico-system/goldmane-666569f655-hvm4l" Dec 16 12:55:57.717681 kubelet[2789]: I1216 12:55:57.717534 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwjlh\" (UniqueName: \"kubernetes.io/projected/9d74f6fb-d2c4-41ff-9241-88dfaec31538-kube-api-access-vwjlh\") pod \"goldmane-666569f655-hvm4l\" (UID: \"9d74f6fb-d2c4-41ff-9241-88dfaec31538\") " pod="calico-system/goldmane-666569f655-hvm4l" Dec 16 12:55:57.717752 kubelet[2789]: I1216 12:55:57.717673 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6wnn\" (UniqueName: \"kubernetes.io/projected/4d5fd089-d56c-460c-b006-cc36a126ec32-kube-api-access-l6wnn\") pod \"calico-apiserver-5f6f6f8cd5-nxqwt\" (UID: \"4d5fd089-d56c-460c-b006-cc36a126ec32\") " pod="calico-apiserver/calico-apiserver-5f6f6f8cd5-nxqwt" Dec 16 12:55:57.718575 kubelet[2789]: I1216 12:55:57.718538 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5a9eaa6d-ffc5-496a-b44d-d7e196b6b18c-calico-apiserver-certs\") pod \"calico-apiserver-5f6f6f8cd5-qx8fz\" (UID: \"5a9eaa6d-ffc5-496a-b44d-d7e196b6b18c\") " pod="calico-apiserver/calico-apiserver-5f6f6f8cd5-qx8fz" Dec 16 12:55:57.718688 kubelet[2789]: I1216 12:55:57.718598 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plx6k\" (UniqueName: \"kubernetes.io/projected/5a9eaa6d-ffc5-496a-b44d-d7e196b6b18c-kube-api-access-plx6k\") pod \"calico-apiserver-5f6f6f8cd5-qx8fz\" (UID: \"5a9eaa6d-ffc5-496a-b44d-d7e196b6b18c\") " pod="calico-apiserver/calico-apiserver-5f6f6f8cd5-qx8fz" Dec 16 12:55:57.718688 kubelet[2789]: I1216 12:55:57.718624 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/9d74f6fb-d2c4-41ff-9241-88dfaec31538-goldmane-ca-bundle\") pod \"goldmane-666569f655-hvm4l\" (UID: \"9d74f6fb-d2c4-41ff-9241-88dfaec31538\") " pod="calico-system/goldmane-666569f655-hvm4l" Dec 16 12:55:57.718796 kubelet[2789]: I1216 12:55:57.718686 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17667d96-fec2-4c58-952d-8aee4c298c11-config-volume\") pod \"coredns-674b8bbfcf-m8248\" (UID: \"17667d96-fec2-4c58-952d-8aee4c298c11\") " pod="kube-system/coredns-674b8bbfcf-m8248" Dec 16 12:55:57.718796 kubelet[2789]: I1216 12:55:57.718735 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh5qq\" (UniqueName: \"kubernetes.io/projected/17667d96-fec2-4c58-952d-8aee4c298c11-kube-api-access-dh5qq\") pod \"coredns-674b8bbfcf-m8248\" (UID: \"17667d96-fec2-4c58-952d-8aee4c298c11\") " pod="kube-system/coredns-674b8bbfcf-m8248" Dec 16 12:55:57.915884 kubelet[2789]: E1216 12:55:57.915610 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:57.918074 containerd[1617]: time="2025-12-16T12:55:57.918012115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jvzjz,Uid:564acc2b-9d61-41a8-ac66-0231a2d37863,Namespace:kube-system,Attempt:0,}" Dec 16 12:55:57.936029 containerd[1617]: time="2025-12-16T12:55:57.935912148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9b78f4b84-5zpvr,Uid:575a9ccb-ab86-412f-ad4a-68e90d59046f,Namespace:calico-system,Attempt:0,}" Dec 16 12:55:57.970208 containerd[1617]: time="2025-12-16T12:55:57.970106918Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5f6f6f8cd5-nxqwt,Uid:4d5fd089-d56c-460c-b006-cc36a126ec32,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:55:57.983393 containerd[1617]: time="2025-12-16T12:55:57.983255382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f6f6f8cd5-qx8fz,Uid:5a9eaa6d-ffc5-496a-b44d-d7e196b6b18c,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:55:58.006177 kubelet[2789]: E1216 12:55:58.005138 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:58.010567 containerd[1617]: time="2025-12-16T12:55:58.010089940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m8248,Uid:17667d96-fec2-4c58-952d-8aee4c298c11,Namespace:kube-system,Attempt:0,}" Dec 16 12:55:58.013452 containerd[1617]: time="2025-12-16T12:55:58.013376632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b46df5bf6-8vt25,Uid:eceb5ead-85dc-4ae8-98b5-b55994dab5ce,Namespace:calico-system,Attempt:0,}" Dec 16 12:55:58.040088 containerd[1617]: time="2025-12-16T12:55:58.040011873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-hvm4l,Uid:9d74f6fb-d2c4-41ff-9241-88dfaec31538,Namespace:calico-system,Attempt:0,}" Dec 16 12:55:58.310929 containerd[1617]: time="2025-12-16T12:55:58.310788319Z" level=error msg="Failed to destroy network for sandbox \"52561c0f0bd11378725295c55e531a32deaa0fd914a3738812c092c08a1cffa9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:55:58.351621 containerd[1617]: time="2025-12-16T12:55:58.317959024Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-m8248,Uid:17667d96-fec2-4c58-952d-8aee4c298c11,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"52561c0f0bd11378725295c55e531a32deaa0fd914a3738812c092c08a1cffa9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:55:58.353502 containerd[1617]: time="2025-12-16T12:55:58.329896765Z" level=error msg="Failed to destroy network for sandbox \"e689fb065556222e51e4a057069737359c77ca4050aeb160cf54ca13cb990778\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:55:58.360074 containerd[1617]: time="2025-12-16T12:55:58.360003528Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-hvm4l,Uid:9d74f6fb-d2c4-41ff-9241-88dfaec31538,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e689fb065556222e51e4a057069737359c77ca4050aeb160cf54ca13cb990778\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:55:58.361762 containerd[1617]: time="2025-12-16T12:55:58.331936095Z" level=error msg="Failed to destroy network for sandbox \"8003ce78b9222ac5f80b3f1386d20667786facda7137813fec0f6f61017bb600\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:55:58.367334 containerd[1617]: time="2025-12-16T12:55:58.367250706Z" level=error msg="Failed to destroy network for sandbox 
\"954ea162b03811d6318efd088b92e552f5aa44f3d43ff50ce8e5d6fea7097a33\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:55:58.368531 kubelet[2789]: E1216 12:55:58.368454 2789 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e689fb065556222e51e4a057069737359c77ca4050aeb160cf54ca13cb990778\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:55:58.368810 kubelet[2789]: E1216 12:55:58.368578 2789 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e689fb065556222e51e4a057069737359c77ca4050aeb160cf54ca13cb990778\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-hvm4l" Dec 16 12:55:58.368810 kubelet[2789]: E1216 12:55:58.368603 2789 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e689fb065556222e51e4a057069737359c77ca4050aeb160cf54ca13cb990778\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-hvm4l" Dec 16 12:55:58.369497 kubelet[2789]: E1216 12:55:58.369450 2789 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52561c0f0bd11378725295c55e531a32deaa0fd914a3738812c092c08a1cffa9\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:55:58.369591 kubelet[2789]: E1216 12:55:58.369527 2789 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52561c0f0bd11378725295c55e531a32deaa0fd914a3738812c092c08a1cffa9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-m8248" Dec 16 12:55:58.369591 kubelet[2789]: E1216 12:55:58.369559 2789 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52561c0f0bd11378725295c55e531a32deaa0fd914a3738812c092c08a1cffa9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-m8248" Dec 16 12:55:58.369667 kubelet[2789]: E1216 12:55:58.369621 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-m8248_kube-system(17667d96-fec2-4c58-952d-8aee4c298c11)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-m8248_kube-system(17667d96-fec2-4c58-952d-8aee4c298c11)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"52561c0f0bd11378725295c55e531a32deaa0fd914a3738812c092c08a1cffa9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-m8248" podUID="17667d96-fec2-4c58-952d-8aee4c298c11" Dec 16 12:55:58.369913 containerd[1617]: time="2025-12-16T12:55:58.369859582Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f6f6f8cd5-nxqwt,Uid:4d5fd089-d56c-460c-b006-cc36a126ec32,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8003ce78b9222ac5f80b3f1386d20667786facda7137813fec0f6f61017bb600\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:55:58.370967 kubelet[2789]: E1216 12:55:58.370112 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-hvm4l_calico-system(9d74f6fb-d2c4-41ff-9241-88dfaec31538)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-hvm4l_calico-system(9d74f6fb-d2c4-41ff-9241-88dfaec31538)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e689fb065556222e51e4a057069737359c77ca4050aeb160cf54ca13cb990778\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-hvm4l" podUID="9d74f6fb-d2c4-41ff-9241-88dfaec31538" Dec 16 12:55:58.371580 containerd[1617]: time="2025-12-16T12:55:58.371291227Z" level=error msg="Failed to destroy network for sandbox \"1e3a5554c0fc93912c7fa86a71a9fd8b3d9b0c97f9c8c3005e5c0430c3c7a8d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:55:58.372483 kubelet[2789]: E1216 12:55:58.372359 2789 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8003ce78b9222ac5f80b3f1386d20667786facda7137813fec0f6f61017bb600\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:55:58.373112 kubelet[2789]: E1216 12:55:58.372589 2789 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8003ce78b9222ac5f80b3f1386d20667786facda7137813fec0f6f61017bb600\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f6f6f8cd5-nxqwt" Dec 16 12:55:58.373112 kubelet[2789]: E1216 12:55:58.372615 2789 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8003ce78b9222ac5f80b3f1386d20667786facda7137813fec0f6f61017bb600\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f6f6f8cd5-nxqwt" Dec 16 12:55:58.373112 kubelet[2789]: E1216 12:55:58.372701 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f6f6f8cd5-nxqwt_calico-apiserver(4d5fd089-d56c-460c-b006-cc36a126ec32)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f6f6f8cd5-nxqwt_calico-apiserver(4d5fd089-d56c-460c-b006-cc36a126ec32)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8003ce78b9222ac5f80b3f1386d20667786facda7137813fec0f6f61017bb600\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f6f6f8cd5-nxqwt" 
podUID="4d5fd089-d56c-460c-b006-cc36a126ec32" Dec 16 12:55:58.381232 containerd[1617]: time="2025-12-16T12:55:58.381115744Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f6f6f8cd5-qx8fz,Uid:5a9eaa6d-ffc5-496a-b44d-d7e196b6b18c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"954ea162b03811d6318efd088b92e552f5aa44f3d43ff50ce8e5d6fea7097a33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:55:58.382331 kubelet[2789]: E1216 12:55:58.382049 2789 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"954ea162b03811d6318efd088b92e552f5aa44f3d43ff50ce8e5d6fea7097a33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:55:58.382331 kubelet[2789]: E1216 12:55:58.382147 2789 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"954ea162b03811d6318efd088b92e552f5aa44f3d43ff50ce8e5d6fea7097a33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f6f6f8cd5-qx8fz" Dec 16 12:55:58.382331 kubelet[2789]: E1216 12:55:58.382195 2789 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"954ea162b03811d6318efd088b92e552f5aa44f3d43ff50ce8e5d6fea7097a33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f6f6f8cd5-qx8fz" Dec 16 12:55:58.382786 kubelet[2789]: E1216 12:55:58.382301 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f6f6f8cd5-qx8fz_calico-apiserver(5a9eaa6d-ffc5-496a-b44d-d7e196b6b18c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f6f6f8cd5-qx8fz_calico-apiserver(5a9eaa6d-ffc5-496a-b44d-d7e196b6b18c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"954ea162b03811d6318efd088b92e552f5aa44f3d43ff50ce8e5d6fea7097a33\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f6f6f8cd5-qx8fz" podUID="5a9eaa6d-ffc5-496a-b44d-d7e196b6b18c" Dec 16 12:55:58.385815 containerd[1617]: time="2025-12-16T12:55:58.385538502Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b46df5bf6-8vt25,Uid:eceb5ead-85dc-4ae8-98b5-b55994dab5ce,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e3a5554c0fc93912c7fa86a71a9fd8b3d9b0c97f9c8c3005e5c0430c3c7a8d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:55:58.388601 kubelet[2789]: E1216 12:55:58.386576 2789 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e3a5554c0fc93912c7fa86a71a9fd8b3d9b0c97f9c8c3005e5c0430c3c7a8d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:55:58.388601 
kubelet[2789]: E1216 12:55:58.386671 2789 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e3a5554c0fc93912c7fa86a71a9fd8b3d9b0c97f9c8c3005e5c0430c3c7a8d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b46df5bf6-8vt25" Dec 16 12:55:58.388601 kubelet[2789]: E1216 12:55:58.386696 2789 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e3a5554c0fc93912c7fa86a71a9fd8b3d9b0c97f9c8c3005e5c0430c3c7a8d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b46df5bf6-8vt25" Dec 16 12:55:58.388955 kubelet[2789]: E1216 12:55:58.388884 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b46df5bf6-8vt25_calico-system(eceb5ead-85dc-4ae8-98b5-b55994dab5ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b46df5bf6-8vt25_calico-system(eceb5ead-85dc-4ae8-98b5-b55994dab5ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1e3a5554c0fc93912c7fa86a71a9fd8b3d9b0c97f9c8c3005e5c0430c3c7a8d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b46df5bf6-8vt25" podUID="eceb5ead-85dc-4ae8-98b5-b55994dab5ce" Dec 16 12:55:58.397648 containerd[1617]: time="2025-12-16T12:55:58.397584218Z" level=error msg="Failed to destroy network for sandbox 
\"365cfe9d4aa4dafdd27670a6b824094a7cd95d500c92fd5fc19c66a4112ccb79\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:55:58.400113 containerd[1617]: time="2025-12-16T12:55:58.400047078Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9b78f4b84-5zpvr,Uid:575a9ccb-ab86-412f-ad4a-68e90d59046f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"365cfe9d4aa4dafdd27670a6b824094a7cd95d500c92fd5fc19c66a4112ccb79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:55:58.401474 kubelet[2789]: E1216 12:55:58.401320 2789 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"365cfe9d4aa4dafdd27670a6b824094a7cd95d500c92fd5fc19c66a4112ccb79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:55:58.401616 kubelet[2789]: E1216 12:55:58.401533 2789 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"365cfe9d4aa4dafdd27670a6b824094a7cd95d500c92fd5fc19c66a4112ccb79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-9b78f4b84-5zpvr" Dec 16 12:55:58.401616 kubelet[2789]: E1216 12:55:58.401571 2789 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"365cfe9d4aa4dafdd27670a6b824094a7cd95d500c92fd5fc19c66a4112ccb79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-9b78f4b84-5zpvr" Dec 16 12:55:58.401829 kubelet[2789]: E1216 12:55:58.401690 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-9b78f4b84-5zpvr_calico-system(575a9ccb-ab86-412f-ad4a-68e90d59046f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-9b78f4b84-5zpvr_calico-system(575a9ccb-ab86-412f-ad4a-68e90d59046f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"365cfe9d4aa4dafdd27670a6b824094a7cd95d500c92fd5fc19c66a4112ccb79\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-9b78f4b84-5zpvr" podUID="575a9ccb-ab86-412f-ad4a-68e90d59046f" Dec 16 12:55:58.407169 containerd[1617]: time="2025-12-16T12:55:58.407017124Z" level=error msg="Failed to destroy network for sandbox \"3b95c7f26f3ca1195f7eabe4e31d6b55af654fce6801515ae083c6210291391a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:55:58.408719 containerd[1617]: time="2025-12-16T12:55:58.408606556Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jvzjz,Uid:564acc2b-9d61-41a8-ac66-0231a2d37863,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b95c7f26f3ca1195f7eabe4e31d6b55af654fce6801515ae083c6210291391a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:55:58.409217 kubelet[2789]: E1216 12:55:58.409170 2789 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b95c7f26f3ca1195f7eabe4e31d6b55af654fce6801515ae083c6210291391a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:55:58.409298 kubelet[2789]: E1216 12:55:58.409262 2789 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b95c7f26f3ca1195f7eabe4e31d6b55af654fce6801515ae083c6210291391a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jvzjz" Dec 16 12:55:58.409354 kubelet[2789]: E1216 12:55:58.409318 2789 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b95c7f26f3ca1195f7eabe4e31d6b55af654fce6801515ae083c6210291391a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jvzjz" Dec 16 12:55:58.409560 kubelet[2789]: E1216 12:55:58.409423 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-jvzjz_kube-system(564acc2b-9d61-41a8-ac66-0231a2d37863)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-jvzjz_kube-system(564acc2b-9d61-41a8-ac66-0231a2d37863)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"3b95c7f26f3ca1195f7eabe4e31d6b55af654fce6801515ae083c6210291391a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-jvzjz" podUID="564acc2b-9d61-41a8-ac66-0231a2d37863" Dec 16 12:55:58.655553 kubelet[2789]: E1216 12:55:58.655121 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:55:58.659570 containerd[1617]: time="2025-12-16T12:55:58.659542804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 12:55:59.424780 systemd[1]: Created slice kubepods-besteffort-pod7b89c039_0754_43bd_ad85_5506dee48dad.slice - libcontainer container kubepods-besteffort-pod7b89c039_0754_43bd_ad85_5506dee48dad.slice. Dec 16 12:55:59.427849 containerd[1617]: time="2025-12-16T12:55:59.427803720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rmjtf,Uid:7b89c039-0754-43bd-ad85-5506dee48dad,Namespace:calico-system,Attempt:0,}" Dec 16 12:55:59.497304 containerd[1617]: time="2025-12-16T12:55:59.497241670Z" level=error msg="Failed to destroy network for sandbox \"c91e4d6dd9d7e25ddfc7eebd0ce4d91a5cab29be8a62795b76ee44e097904ebd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:55:59.500199 systemd[1]: run-netns-cni\x2d4c90f96b\x2dc518\x2db63e\x2ddd68\x2d05b2e54a4d92.mount: Deactivated successfully. 
Dec 16 12:55:59.501145 containerd[1617]: time="2025-12-16T12:55:59.500953738Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rmjtf,Uid:7b89c039-0754-43bd-ad85-5506dee48dad,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c91e4d6dd9d7e25ddfc7eebd0ce4d91a5cab29be8a62795b76ee44e097904ebd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:55:59.502197 kubelet[2789]: E1216 12:55:59.501505 2789 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c91e4d6dd9d7e25ddfc7eebd0ce4d91a5cab29be8a62795b76ee44e097904ebd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:55:59.502197 kubelet[2789]: E1216 12:55:59.501612 2789 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c91e4d6dd9d7e25ddfc7eebd0ce4d91a5cab29be8a62795b76ee44e097904ebd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rmjtf" Dec 16 12:55:59.502197 kubelet[2789]: E1216 12:55:59.501662 2789 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c91e4d6dd9d7e25ddfc7eebd0ce4d91a5cab29be8a62795b76ee44e097904ebd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rmjtf" 
Dec 16 12:55:59.502319 kubelet[2789]: E1216 12:55:59.501746 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rmjtf_calico-system(7b89c039-0754-43bd-ad85-5506dee48dad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rmjtf_calico-system(7b89c039-0754-43bd-ad85-5506dee48dad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c91e4d6dd9d7e25ddfc7eebd0ce4d91a5cab29be8a62795b76ee44e097904ebd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rmjtf" podUID="7b89c039-0754-43bd-ad85-5506dee48dad" Dec 16 12:56:05.019519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1669171360.mount: Deactivated successfully. Dec 16 12:56:05.083088 containerd[1617]: time="2025-12-16T12:56:05.066924861Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:56:05.085383 containerd[1617]: time="2025-12-16T12:56:05.085333702Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:56:05.089203 containerd[1617]: time="2025-12-16T12:56:05.088436070Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Dec 16 12:56:05.089203 containerd[1617]: time="2025-12-16T12:56:05.088905663Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:56:05.089503 containerd[1617]: time="2025-12-16T12:56:05.089466276Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.426138757s" Dec 16 12:56:05.100485 containerd[1617]: time="2025-12-16T12:56:05.100414103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 16 12:56:05.140235 containerd[1617]: time="2025-12-16T12:56:05.140190519Z" level=info msg="CreateContainer within sandbox \"4145a736864d243e1730f7ee86ef11cac83fca4dd4712618963b5abe4c7004fc\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 12:56:05.245901 containerd[1617]: time="2025-12-16T12:56:05.245849626Z" level=info msg="Container a81157192977d0d02151f8d455f41b01c9a9cf2377ed0c4fb9984a29cc5cee43: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:56:05.250397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2318652328.mount: Deactivated successfully. 
Dec 16 12:56:05.311277 containerd[1617]: time="2025-12-16T12:56:05.310865713Z" level=info msg="CreateContainer within sandbox \"4145a736864d243e1730f7ee86ef11cac83fca4dd4712618963b5abe4c7004fc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a81157192977d0d02151f8d455f41b01c9a9cf2377ed0c4fb9984a29cc5cee43\"" Dec 16 12:56:05.312188 containerd[1617]: time="2025-12-16T12:56:05.312086141Z" level=info msg="StartContainer for \"a81157192977d0d02151f8d455f41b01c9a9cf2377ed0c4fb9984a29cc5cee43\"" Dec 16 12:56:05.316958 containerd[1617]: time="2025-12-16T12:56:05.316886526Z" level=info msg="connecting to shim a81157192977d0d02151f8d455f41b01c9a9cf2377ed0c4fb9984a29cc5cee43" address="unix:///run/containerd/s/67e1fbe24b18ee8282af5281b291451a0b5bd89bb33d127ba94afc43d4ed0975" protocol=ttrpc version=3 Dec 16 12:56:05.410210 systemd[1]: Started cri-containerd-a81157192977d0d02151f8d455f41b01c9a9cf2377ed0c4fb9984a29cc5cee43.scope - libcontainer container a81157192977d0d02151f8d455f41b01c9a9cf2377ed0c4fb9984a29cc5cee43. 
Dec 16 12:56:05.481000 audit: BPF prog-id=175 op=LOAD Dec 16 12:56:05.490799 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 16 12:56:05.493417 kernel: audit: type=1334 audit(1765889765.481:580): prog-id=175 op=LOAD Dec 16 12:56:05.493498 kernel: audit: type=1300 audit(1765889765.481:580): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3387 pid=3849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:05.493522 kernel: audit: type=1327 audit(1765889765.481:580): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138313135373139323937376430643032313531663864343535663431 Dec 16 12:56:05.481000 audit[3849]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3387 pid=3849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:05.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138313135373139323937376430643032313531663864343535663431 Dec 16 12:56:05.485000 audit: BPF prog-id=176 op=LOAD Dec 16 12:56:05.485000 audit[3849]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3387 pid=3849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:05.505152 kernel: audit: type=1334 
audit(1765889765.485:581): prog-id=176 op=LOAD Dec 16 12:56:05.505310 kernel: audit: type=1300 audit(1765889765.485:581): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3387 pid=3849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:05.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138313135373139323937376430643032313531663864343535663431 Dec 16 12:56:05.485000 audit: BPF prog-id=176 op=UNLOAD Dec 16 12:56:05.517993 kernel: audit: type=1327 audit(1765889765.485:581): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138313135373139323937376430643032313531663864343535663431 Dec 16 12:56:05.518102 kernel: audit: type=1334 audit(1765889765.485:582): prog-id=176 op=UNLOAD Dec 16 12:56:05.485000 audit[3849]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3387 pid=3849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:05.522092 kernel: audit: type=1300 audit(1765889765.485:582): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3387 pid=3849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:05.485000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138313135373139323937376430643032313531663864343535663431 Dec 16 12:56:05.527912 kernel: audit: type=1327 audit(1765889765.485:582): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138313135373139323937376430643032313531663864343535663431 Dec 16 12:56:05.485000 audit: BPF prog-id=175 op=UNLOAD Dec 16 12:56:05.485000 audit[3849]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3387 pid=3849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:05.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138313135373139323937376430643032313531663864343535663431 Dec 16 12:56:05.485000 audit: BPF prog-id=177 op=LOAD Dec 16 12:56:05.485000 audit[3849]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3387 pid=3849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:05.536127 kernel: audit: type=1334 audit(1765889765.485:583): prog-id=175 op=UNLOAD Dec 16 12:56:05.485000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138313135373139323937376430643032313531663864343535663431 Dec 16 12:56:05.545986 containerd[1617]: time="2025-12-16T12:56:05.545947807Z" level=info msg="StartContainer for \"a81157192977d0d02151f8d455f41b01c9a9cf2377ed0c4fb9984a29cc5cee43\" returns successfully" Dec 16 12:56:05.687083 kubelet[2789]: E1216 12:56:05.686914 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:56:05.695525 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 12:56:05.697296 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 16 12:56:05.729656 kubelet[2789]: I1216 12:56:05.726749 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-s2m79" podStartSLOduration=1.9110078179999999 podStartE2EDuration="18.726720839s" podCreationTimestamp="2025-12-16 12:55:47 +0000 UTC" firstStartedPulling="2025-12-16 12:55:48.28597285 +0000 UTC m=+22.111442245" lastFinishedPulling="2025-12-16 12:56:05.101685873 +0000 UTC m=+38.927155266" observedRunningTime="2025-12-16 12:56:05.723868842 +0000 UTC m=+39.549338256" watchObservedRunningTime="2025-12-16 12:56:05.726720839 +0000 UTC m=+39.552190249" Dec 16 12:56:06.007679 kubelet[2789]: I1216 12:56:06.007308 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frvjm\" (UniqueName: \"kubernetes.io/projected/575a9ccb-ab86-412f-ad4a-68e90d59046f-kube-api-access-frvjm\") pod \"575a9ccb-ab86-412f-ad4a-68e90d59046f\" (UID: \"575a9ccb-ab86-412f-ad4a-68e90d59046f\") " Dec 16 12:56:06.007679 kubelet[2789]: I1216 12:56:06.007361 2789 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/575a9ccb-ab86-412f-ad4a-68e90d59046f-whisker-backend-key-pair\") pod \"575a9ccb-ab86-412f-ad4a-68e90d59046f\" (UID: \"575a9ccb-ab86-412f-ad4a-68e90d59046f\") " Dec 16 12:56:06.007679 kubelet[2789]: I1216 12:56:06.007401 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/575a9ccb-ab86-412f-ad4a-68e90d59046f-whisker-ca-bundle\") pod \"575a9ccb-ab86-412f-ad4a-68e90d59046f\" (UID: \"575a9ccb-ab86-412f-ad4a-68e90d59046f\") " Dec 16 12:56:06.051508 systemd[1]: var-lib-kubelet-pods-575a9ccb\x2dab86\x2d412f\x2dad4a\x2d68e90d59046f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfrvjm.mount: Deactivated successfully. Dec 16 12:56:06.063964 kubelet[2789]: I1216 12:56:06.063901 2789 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/575a9ccb-ab86-412f-ad4a-68e90d59046f-kube-api-access-frvjm" (OuterVolumeSpecName: "kube-api-access-frvjm") pod "575a9ccb-ab86-412f-ad4a-68e90d59046f" (UID: "575a9ccb-ab86-412f-ad4a-68e90d59046f"). InnerVolumeSpecName "kube-api-access-frvjm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:56:06.064769 kubelet[2789]: I1216 12:56:06.064191 2789 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/575a9ccb-ab86-412f-ad4a-68e90d59046f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "575a9ccb-ab86-412f-ad4a-68e90d59046f" (UID: "575a9ccb-ab86-412f-ad4a-68e90d59046f"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 12:56:06.067683 kubelet[2789]: I1216 12:56:06.065790 2789 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/575a9ccb-ab86-412f-ad4a-68e90d59046f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "575a9ccb-ab86-412f-ad4a-68e90d59046f" (UID: "575a9ccb-ab86-412f-ad4a-68e90d59046f"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 12:56:06.069551 systemd[1]: var-lib-kubelet-pods-575a9ccb\x2dab86\x2d412f\x2dad4a\x2d68e90d59046f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 16 12:56:06.108272 kubelet[2789]: I1216 12:56:06.108209 2789 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-frvjm\" (UniqueName: \"kubernetes.io/projected/575a9ccb-ab86-412f-ad4a-68e90d59046f-kube-api-access-frvjm\") on node \"ci-4515.1.0-3-ef2be4b8ba\" DevicePath \"\"" Dec 16 12:56:06.108272 kubelet[2789]: I1216 12:56:06.108264 2789 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/575a9ccb-ab86-412f-ad4a-68e90d59046f-whisker-backend-key-pair\") on node \"ci-4515.1.0-3-ef2be4b8ba\" DevicePath \"\"" Dec 16 12:56:06.108885 kubelet[2789]: I1216 12:56:06.108289 2789 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/575a9ccb-ab86-412f-ad4a-68e90d59046f-whisker-ca-bundle\") on node \"ci-4515.1.0-3-ef2be4b8ba\" DevicePath \"\"" Dec 16 12:56:06.431531 systemd[1]: Removed slice kubepods-besteffort-pod575a9ccb_ab86_412f_ad4a_68e90d59046f.slice - libcontainer container kubepods-besteffort-pod575a9ccb_ab86_412f_ad4a_68e90d59046f.slice. 
Dec 16 12:56:06.687797 kubelet[2789]: I1216 12:56:06.685719 2789 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:56:06.688889 kubelet[2789]: E1216 12:56:06.688826 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:56:06.807817 systemd[1]: Created slice kubepods-besteffort-poddd978ccd_4987_4640_957a_11962c9801ea.slice - libcontainer container kubepods-besteffort-poddd978ccd_4987_4640_957a_11962c9801ea.slice. Dec 16 12:56:06.915468 kubelet[2789]: I1216 12:56:06.915396 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dd978ccd-4987-4640-957a-11962c9801ea-whisker-backend-key-pair\") pod \"whisker-675645944b-fkqz9\" (UID: \"dd978ccd-4987-4640-957a-11962c9801ea\") " pod="calico-system/whisker-675645944b-fkqz9" Dec 16 12:56:06.915665 kubelet[2789]: I1216 12:56:06.915602 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqmlr\" (UniqueName: \"kubernetes.io/projected/dd978ccd-4987-4640-957a-11962c9801ea-kube-api-access-hqmlr\") pod \"whisker-675645944b-fkqz9\" (UID: \"dd978ccd-4987-4640-957a-11962c9801ea\") " pod="calico-system/whisker-675645944b-fkqz9" Dec 16 12:56:06.915719 kubelet[2789]: I1216 12:56:06.915686 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd978ccd-4987-4640-957a-11962c9801ea-whisker-ca-bundle\") pod \"whisker-675645944b-fkqz9\" (UID: \"dd978ccd-4987-4640-957a-11962c9801ea\") " pod="calico-system/whisker-675645944b-fkqz9" Dec 16 12:56:07.113334 containerd[1617]: time="2025-12-16T12:56:07.113227104Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-675645944b-fkqz9,Uid:dd978ccd-4987-4640-957a-11962c9801ea,Namespace:calico-system,Attempt:0,}" Dec 16 12:56:07.549278 systemd-networkd[1510]: calie4e88637b72: Link UP Dec 16 12:56:07.552910 systemd-networkd[1510]: calie4e88637b72: Gained carrier Dec 16 12:56:07.602188 containerd[1617]: 2025-12-16 12:56:07.155 [INFO][3913] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:56:07.602188 containerd[1617]: 2025-12-16 12:56:07.194 [INFO][3913] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--3--ef2be4b8ba-k8s-whisker--675645944b--fkqz9-eth0 whisker-675645944b- calico-system dd978ccd-4987-4640-957a-11962c9801ea 982 0 2025-12-16 12:56:06 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:675645944b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4515.1.0-3-ef2be4b8ba whisker-675645944b-fkqz9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie4e88637b72 [] [] }} ContainerID="ea21f54e50f98f71c3968b98522990c3fdf5257a97704a91f57213cdd4530d01" Namespace="calico-system" Pod="whisker-675645944b-fkqz9" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-whisker--675645944b--fkqz9-" Dec 16 12:56:07.602188 containerd[1617]: 2025-12-16 12:56:07.194 [INFO][3913] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ea21f54e50f98f71c3968b98522990c3fdf5257a97704a91f57213cdd4530d01" Namespace="calico-system" Pod="whisker-675645944b-fkqz9" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-whisker--675645944b--fkqz9-eth0" Dec 16 12:56:07.602188 containerd[1617]: 2025-12-16 12:56:07.451 [INFO][3924] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ea21f54e50f98f71c3968b98522990c3fdf5257a97704a91f57213cdd4530d01" 
HandleID="k8s-pod-network.ea21f54e50f98f71c3968b98522990c3fdf5257a97704a91f57213cdd4530d01" Workload="ci--4515.1.0--3--ef2be4b8ba-k8s-whisker--675645944b--fkqz9-eth0" Dec 16 12:56:07.603222 containerd[1617]: 2025-12-16 12:56:07.454 [INFO][3924] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ea21f54e50f98f71c3968b98522990c3fdf5257a97704a91f57213cdd4530d01" HandleID="k8s-pod-network.ea21f54e50f98f71c3968b98522990c3fdf5257a97704a91f57213cdd4530d01" Workload="ci--4515.1.0--3--ef2be4b8ba-k8s-whisker--675645944b--fkqz9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e620), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-3-ef2be4b8ba", "pod":"whisker-675645944b-fkqz9", "timestamp":"2025-12-16 12:56:07.451163428 +0000 UTC"}, Hostname:"ci-4515.1.0-3-ef2be4b8ba", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:56:07.603222 containerd[1617]: 2025-12-16 12:56:07.454 [INFO][3924] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:56:07.603222 containerd[1617]: 2025-12-16 12:56:07.457 [INFO][3924] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:56:07.603222 containerd[1617]: 2025-12-16 12:56:07.458 [INFO][3924] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-3-ef2be4b8ba' Dec 16 12:56:07.603222 containerd[1617]: 2025-12-16 12:56:07.472 [INFO][3924] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ea21f54e50f98f71c3968b98522990c3fdf5257a97704a91f57213cdd4530d01" host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:07.603222 containerd[1617]: 2025-12-16 12:56:07.485 [INFO][3924] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:07.603222 containerd[1617]: 2025-12-16 12:56:07.494 [INFO][3924] ipam/ipam.go 511: Trying affinity for 192.168.79.128/26 host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:07.603222 containerd[1617]: 2025-12-16 12:56:07.497 [INFO][3924] ipam/ipam.go 158: Attempting to load block cidr=192.168.79.128/26 host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:07.603222 containerd[1617]: 2025-12-16 12:56:07.500 [INFO][3924] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.79.128/26 host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:07.603573 containerd[1617]: 2025-12-16 12:56:07.501 [INFO][3924] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.79.128/26 handle="k8s-pod-network.ea21f54e50f98f71c3968b98522990c3fdf5257a97704a91f57213cdd4530d01" host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:07.603573 containerd[1617]: 2025-12-16 12:56:07.503 [INFO][3924] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ea21f54e50f98f71c3968b98522990c3fdf5257a97704a91f57213cdd4530d01 Dec 16 12:56:07.603573 containerd[1617]: 2025-12-16 12:56:07.509 [INFO][3924] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.79.128/26 handle="k8s-pod-network.ea21f54e50f98f71c3968b98522990c3fdf5257a97704a91f57213cdd4530d01" host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:07.603573 containerd[1617]: 2025-12-16 12:56:07.517 [INFO][3924] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.79.129/26] block=192.168.79.128/26 handle="k8s-pod-network.ea21f54e50f98f71c3968b98522990c3fdf5257a97704a91f57213cdd4530d01" host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:07.603573 containerd[1617]: 2025-12-16 12:56:07.518 [INFO][3924] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.79.129/26] handle="k8s-pod-network.ea21f54e50f98f71c3968b98522990c3fdf5257a97704a91f57213cdd4530d01" host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:07.603573 containerd[1617]: 2025-12-16 12:56:07.518 [INFO][3924] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:56:07.603573 containerd[1617]: 2025-12-16 12:56:07.518 [INFO][3924] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.79.129/26] IPv6=[] ContainerID="ea21f54e50f98f71c3968b98522990c3fdf5257a97704a91f57213cdd4530d01" HandleID="k8s-pod-network.ea21f54e50f98f71c3968b98522990c3fdf5257a97704a91f57213cdd4530d01" Workload="ci--4515.1.0--3--ef2be4b8ba-k8s-whisker--675645944b--fkqz9-eth0" Dec 16 12:56:07.605890 containerd[1617]: 2025-12-16 12:56:07.524 [INFO][3913] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ea21f54e50f98f71c3968b98522990c3fdf5257a97704a91f57213cdd4530d01" Namespace="calico-system" Pod="whisker-675645944b-fkqz9" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-whisker--675645944b--fkqz9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--3--ef2be4b8ba-k8s-whisker--675645944b--fkqz9-eth0", GenerateName:"whisker-675645944b-", Namespace:"calico-system", SelfLink:"", UID:"dd978ccd-4987-4640-957a-11962c9801ea", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"675645944b", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-3-ef2be4b8ba", ContainerID:"", Pod:"whisker-675645944b-fkqz9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.79.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie4e88637b72", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:56:07.605890 containerd[1617]: 2025-12-16 12:56:07.524 [INFO][3913] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.129/32] ContainerID="ea21f54e50f98f71c3968b98522990c3fdf5257a97704a91f57213cdd4530d01" Namespace="calico-system" Pod="whisker-675645944b-fkqz9" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-whisker--675645944b--fkqz9-eth0" Dec 16 12:56:07.606352 containerd[1617]: 2025-12-16 12:56:07.524 [INFO][3913] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie4e88637b72 ContainerID="ea21f54e50f98f71c3968b98522990c3fdf5257a97704a91f57213cdd4530d01" Namespace="calico-system" Pod="whisker-675645944b-fkqz9" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-whisker--675645944b--fkqz9-eth0" Dec 16 12:56:07.606352 containerd[1617]: 2025-12-16 12:56:07.563 [INFO][3913] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ea21f54e50f98f71c3968b98522990c3fdf5257a97704a91f57213cdd4530d01" Namespace="calico-system" Pod="whisker-675645944b-fkqz9" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-whisker--675645944b--fkqz9-eth0" Dec 16 12:56:07.606441 containerd[1617]: 2025-12-16 12:56:07.568 [INFO][3913] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="ea21f54e50f98f71c3968b98522990c3fdf5257a97704a91f57213cdd4530d01" Namespace="calico-system" Pod="whisker-675645944b-fkqz9" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-whisker--675645944b--fkqz9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--3--ef2be4b8ba-k8s-whisker--675645944b--fkqz9-eth0", GenerateName:"whisker-675645944b-", Namespace:"calico-system", SelfLink:"", UID:"dd978ccd-4987-4640-957a-11962c9801ea", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"675645944b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-3-ef2be4b8ba", ContainerID:"ea21f54e50f98f71c3968b98522990c3fdf5257a97704a91f57213cdd4530d01", Pod:"whisker-675645944b-fkqz9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.79.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie4e88637b72", MAC:"fa:1c:e2:f9:f5:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:56:07.606510 containerd[1617]: 2025-12-16 12:56:07.596 [INFO][3913] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="ea21f54e50f98f71c3968b98522990c3fdf5257a97704a91f57213cdd4530d01" Namespace="calico-system" Pod="whisker-675645944b-fkqz9" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-whisker--675645944b--fkqz9-eth0" Dec 16 12:56:07.822735 containerd[1617]: time="2025-12-16T12:56:07.821749514Z" level=info msg="connecting to shim ea21f54e50f98f71c3968b98522990c3fdf5257a97704a91f57213cdd4530d01" address="unix:///run/containerd/s/90750963da8515805cd942a8cbbe44b4e6464a15af3a15394e2ecb98ad7bf50c" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:56:07.890937 systemd[1]: Started cri-containerd-ea21f54e50f98f71c3968b98522990c3fdf5257a97704a91f57213cdd4530d01.scope - libcontainer container ea21f54e50f98f71c3968b98522990c3fdf5257a97704a91f57213cdd4530d01. Dec 16 12:56:07.935000 audit: BPF prog-id=178 op=LOAD Dec 16 12:56:07.938000 audit: BPF prog-id=179 op=LOAD Dec 16 12:56:07.938000 audit[4052]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4039 pid=4052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:07.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561323166353465353066393866373163333936386239383532323939 Dec 16 12:56:07.938000 audit: BPF prog-id=179 op=UNLOAD Dec 16 12:56:07.938000 audit[4052]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4039 pid=4052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:07.938000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561323166353465353066393866373163333936386239383532323939 Dec 16 12:56:07.939000 audit: BPF prog-id=180 op=LOAD Dec 16 12:56:07.939000 audit[4052]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4039 pid=4052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:07.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561323166353465353066393866373163333936386239383532323939 Dec 16 12:56:07.939000 audit: BPF prog-id=181 op=LOAD Dec 16 12:56:07.939000 audit[4052]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4039 pid=4052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:07.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561323166353465353066393866373163333936386239383532323939 Dec 16 12:56:07.939000 audit: BPF prog-id=181 op=UNLOAD Dec 16 12:56:07.939000 audit[4052]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4039 pid=4052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 12:56:07.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561323166353465353066393866373163333936386239383532323939 Dec 16 12:56:07.939000 audit: BPF prog-id=180 op=UNLOAD Dec 16 12:56:07.939000 audit[4052]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4039 pid=4052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:07.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561323166353465353066393866373163333936386239383532323939 Dec 16 12:56:07.939000 audit: BPF prog-id=182 op=LOAD Dec 16 12:56:07.939000 audit[4052]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=4039 pid=4052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:07.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561323166353465353066393866373163333936386239383532323939 Dec 16 12:56:07.946000 audit: BPF prog-id=183 op=LOAD Dec 16 12:56:07.946000 audit[4079]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe1c0dadb0 a2=98 a3=1fffffffffffffff items=0 ppid=3943 pid=4079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:07.946000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:56:07.948000 audit: BPF prog-id=183 op=UNLOAD Dec 16 12:56:07.948000 audit[4079]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe1c0dad80 a3=0 items=0 ppid=3943 pid=4079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:07.948000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:56:07.949000 audit: BPF prog-id=184 op=LOAD Dec 16 12:56:07.949000 audit[4079]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe1c0dac90 a2=94 a3=3 items=0 ppid=3943 pid=4079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:07.949000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:56:07.950000 audit: BPF prog-id=184 op=UNLOAD Dec 16 12:56:07.950000 audit[4079]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe1c0dac90 a2=94 a3=3 items=0 ppid=3943 pid=4079 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:07.950000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:56:07.951000 audit: BPF prog-id=185 op=LOAD Dec 16 12:56:07.951000 audit[4079]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe1c0dacd0 a2=94 a3=7ffe1c0daeb0 items=0 ppid=3943 pid=4079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:07.951000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:56:07.951000 audit: BPF prog-id=185 op=UNLOAD Dec 16 12:56:07.951000 audit[4079]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe1c0dacd0 a2=94 a3=7ffe1c0daeb0 items=0 ppid=3943 pid=4079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:07.951000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:56:07.991000 audit: BPF prog-id=186 op=LOAD Dec 16 12:56:07.991000 audit[4082]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 
a1=7ffd2b9d3590 a2=98 a3=3 items=0 ppid=3943 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:07.991000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:56:08.004000 audit: BPF prog-id=186 op=UNLOAD Dec 16 12:56:08.004000 audit[4082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd2b9d3560 a3=0 items=0 ppid=3943 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.004000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:56:08.011000 audit: BPF prog-id=187 op=LOAD Dec 16 12:56:08.011000 audit[4082]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd2b9d3380 a2=94 a3=54428f items=0 ppid=3943 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.011000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:56:08.012000 audit: BPF prog-id=187 op=UNLOAD Dec 16 12:56:08.012000 audit[4082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd2b9d3380 a2=94 a3=54428f items=0 ppid=3943 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.012000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:56:08.012000 audit: BPF prog-id=188 op=LOAD Dec 16 12:56:08.012000 audit[4082]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd2b9d33b0 a2=94 a3=2 items=0 ppid=3943 pid=4082 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.012000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:56:08.012000 audit: BPF prog-id=188 op=UNLOAD Dec 16 12:56:08.012000 audit[4082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd2b9d33b0 a2=0 a3=2 items=0 ppid=3943 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.012000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:56:08.091427 containerd[1617]: time="2025-12-16T12:56:08.090835921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-675645944b-fkqz9,Uid:dd978ccd-4987-4640-957a-11962c9801ea,Namespace:calico-system,Attempt:0,} returns sandbox id \"ea21f54e50f98f71c3968b98522990c3fdf5257a97704a91f57213cdd4530d01\"" Dec 16 12:56:08.114249 containerd[1617]: time="2025-12-16T12:56:08.113788006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:56:08.314000 audit: BPF prog-id=189 op=LOAD Dec 16 12:56:08.314000 audit[4082]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd2b9d3270 a2=94 a3=1 items=0 ppid=3943 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.314000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:56:08.314000 audit: BPF prog-id=189 op=UNLOAD Dec 16 12:56:08.314000 audit[4082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd2b9d3270 a2=94 a3=1 items=0 ppid=3943 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.314000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:56:08.328000 audit: BPF prog-id=190 op=LOAD Dec 16 12:56:08.328000 audit[4082]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd2b9d3260 a2=94 a3=4 items=0 ppid=3943 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.328000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:56:08.328000 audit: BPF prog-id=190 op=UNLOAD Dec 16 12:56:08.328000 audit[4082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd2b9d3260 a2=0 a3=4 items=0 ppid=3943 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.328000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:56:08.328000 audit: BPF prog-id=191 op=LOAD Dec 16 12:56:08.328000 audit[4082]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd2b9d30c0 a2=94 a3=5 items=0 ppid=3943 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.328000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:56:08.328000 audit: BPF prog-id=191 op=UNLOAD Dec 16 12:56:08.328000 audit[4082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd2b9d30c0 a2=0 a3=5 items=0 ppid=3943 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.328000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:56:08.329000 audit: BPF prog-id=192 op=LOAD Dec 16 12:56:08.329000 audit[4082]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd2b9d32e0 a2=94 a3=6 items=0 ppid=3943 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.329000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:56:08.330000 audit: BPF prog-id=192 op=UNLOAD Dec 16 12:56:08.330000 audit[4082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd2b9d32e0 a2=0 a3=6 items=0 ppid=3943 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.330000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:56:08.330000 audit: BPF prog-id=193 op=LOAD Dec 16 12:56:08.330000 audit[4082]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd2b9d2a90 a2=94 a3=88 items=0 ppid=3943 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.330000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:56:08.331000 audit: BPF prog-id=194 op=LOAD Dec 16 12:56:08.331000 audit[4082]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffd2b9d2910 a2=94 a3=2 items=0 ppid=3943 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.331000 audit: 
PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:56:08.332000 audit: BPF prog-id=194 op=UNLOAD Dec 16 12:56:08.332000 audit[4082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffd2b9d2940 a2=0 a3=7ffd2b9d2a40 items=0 ppid=3943 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.332000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:56:08.332000 audit: BPF prog-id=193 op=UNLOAD Dec 16 12:56:08.332000 audit[4082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=3e156d10 a2=0 a3=1b7895fe0bbadf7 items=0 ppid=3943 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.332000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:56:08.352000 audit: BPF prog-id=195 op=LOAD Dec 16 12:56:08.352000 audit[4111]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc04ddc780 a2=98 a3=1999999999999999 items=0 ppid=3943 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.352000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:56:08.354000 audit: BPF prog-id=195 op=UNLOAD Dec 16 12:56:08.354000 audit[4111]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc04ddc750 a3=0 items=0 ppid=3943 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.354000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:56:08.354000 audit: BPF prog-id=196 op=LOAD Dec 16 12:56:08.354000 audit[4111]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc04ddc660 a2=94 a3=ffff items=0 ppid=3943 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.354000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:56:08.354000 audit: BPF prog-id=196 op=UNLOAD Dec 16 12:56:08.354000 audit[4111]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc04ddc660 a2=94 a3=ffff items=0 ppid=3943 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.354000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:56:08.354000 audit: BPF prog-id=197 op=LOAD Dec 16 12:56:08.354000 audit[4111]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc04ddc6a0 
a2=94 a3=7ffc04ddc880 items=0 ppid=3943 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.354000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:56:08.354000 audit: BPF prog-id=197 op=UNLOAD Dec 16 12:56:08.354000 audit[4111]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc04ddc6a0 a2=94 a3=7ffc04ddc880 items=0 ppid=3943 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.354000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:56:08.425515 kubelet[2789]: I1216 12:56:08.425458 2789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="575a9ccb-ab86-412f-ad4a-68e90d59046f" path="/var/lib/kubelet/pods/575a9ccb-ab86-412f-ad4a-68e90d59046f/volumes" Dec 16 12:56:08.427387 containerd[1617]: time="2025-12-16T12:56:08.427300464Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:56:08.432926 containerd[1617]: time="2025-12-16T12:56:08.432686885Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 
12:56:08.432926 containerd[1617]: time="2025-12-16T12:56:08.432807035Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:56:08.433245 kubelet[2789]: E1216 12:56:08.433167 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:56:08.437174 kubelet[2789]: E1216 12:56:08.435982 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:56:08.440071 systemd-networkd[1510]: vxlan.calico: Link UP Dec 16 12:56:08.440081 systemd-networkd[1510]: vxlan.calico: Gained carrier Dec 16 12:56:08.475308 kubelet[2789]: E1216 12:56:08.475207 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b2c13645dac8477e847d7b1cce258193,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hqmlr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-675645944b-fkqz9_calico-system(dd978ccd-4987-4640-957a-11962c9801ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:56:08.479813 containerd[1617]: time="2025-12-16T12:56:08.479737617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:56:08.501000 audit: BPF prog-id=198 op=LOAD Dec 16 
12:56:08.501000 audit[4137]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe67fefac0 a2=98 a3=0 items=0 ppid=3943 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.501000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:56:08.501000 audit: BPF prog-id=198 op=UNLOAD Dec 16 12:56:08.501000 audit[4137]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe67fefa90 a3=0 items=0 ppid=3943 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.501000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:56:08.503000 audit: BPF prog-id=199 op=LOAD Dec 16 12:56:08.503000 audit[4137]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe67fef8d0 a2=94 a3=54428f items=0 ppid=3943 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.503000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:56:08.503000 audit: BPF prog-id=199 op=UNLOAD Dec 16 12:56:08.503000 audit[4137]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=3 a1=7ffe67fef8d0 a2=94 a3=54428f items=0 ppid=3943 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.503000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:56:08.503000 audit: BPF prog-id=200 op=LOAD Dec 16 12:56:08.503000 audit[4137]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe67fef900 a2=94 a3=2 items=0 ppid=3943 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.503000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:56:08.503000 audit: BPF prog-id=200 op=UNLOAD Dec 16 12:56:08.503000 audit[4137]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe67fef900 a2=0 a3=2 items=0 ppid=3943 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.503000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:56:08.503000 audit: BPF prog-id=201 op=LOAD Dec 16 12:56:08.503000 audit[4137]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe67fef6b0 a2=94 a3=4 items=0 
ppid=3943 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.503000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:56:08.503000 audit: BPF prog-id=201 op=UNLOAD Dec 16 12:56:08.503000 audit[4137]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe67fef6b0 a2=94 a3=4 items=0 ppid=3943 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.503000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:56:08.503000 audit: BPF prog-id=202 op=LOAD Dec 16 12:56:08.503000 audit[4137]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe67fef7b0 a2=94 a3=7ffe67fef930 items=0 ppid=3943 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.503000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:56:08.503000 audit: BPF prog-id=202 op=UNLOAD Dec 16 12:56:08.503000 audit[4137]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe67fef7b0 a2=0 a3=7ffe67fef930 items=0 ppid=3943 pid=4137 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.503000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:56:08.504000 audit: BPF prog-id=203 op=LOAD Dec 16 12:56:08.504000 audit[4137]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe67feeee0 a2=94 a3=2 items=0 ppid=3943 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.504000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:56:08.504000 audit: BPF prog-id=203 op=UNLOAD Dec 16 12:56:08.504000 audit[4137]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe67feeee0 a2=0 a3=2 items=0 ppid=3943 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.504000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:56:08.504000 audit: BPF prog-id=204 op=LOAD Dec 16 12:56:08.504000 audit[4137]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe67feefe0 a2=94 a3=30 items=0 ppid=3943 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.504000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:56:08.516000 audit: BPF prog-id=205 op=LOAD Dec 16 12:56:08.516000 audit[4141]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffef11ca320 a2=98 a3=0 items=0 ppid=3943 pid=4141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.516000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:56:08.516000 audit: BPF prog-id=205 op=UNLOAD Dec 16 12:56:08.516000 audit[4141]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffef11ca2f0 a3=0 items=0 ppid=3943 pid=4141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.516000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:56:08.516000 audit: BPF prog-id=206 op=LOAD Dec 16 12:56:08.516000 audit[4141]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffef11ca110 a2=94 a3=54428f items=0 ppid=3943 pid=4141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.516000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:56:08.516000 audit: BPF prog-id=206 op=UNLOAD Dec 16 12:56:08.516000 audit[4141]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffef11ca110 a2=94 a3=54428f items=0 ppid=3943 pid=4141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.516000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:56:08.516000 audit: BPF prog-id=207 op=LOAD Dec 16 12:56:08.516000 audit[4141]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffef11ca140 a2=94 a3=2 items=0 ppid=3943 pid=4141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.516000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:56:08.516000 audit: BPF prog-id=207 op=UNLOAD Dec 16 12:56:08.516000 audit[4141]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffef11ca140 a2=0 a3=2 items=0 ppid=3943 pid=4141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.516000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:56:08.718000 audit: BPF prog-id=208 op=LOAD Dec 16 12:56:08.718000 audit[4141]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffef11ca000 a2=94 a3=1 items=0 ppid=3943 pid=4141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.718000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:56:08.718000 audit: BPF prog-id=208 op=UNLOAD Dec 16 12:56:08.718000 audit[4141]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffef11ca000 a2=94 a3=1 items=0 ppid=3943 pid=4141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.718000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:56:08.731000 audit: BPF prog-id=209 op=LOAD Dec 16 12:56:08.731000 audit[4141]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffef11c9ff0 a2=94 a3=4 items=0 ppid=3943 pid=4141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.731000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:56:08.732000 audit: BPF prog-id=209 op=UNLOAD Dec 16 12:56:08.732000 audit[4141]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffef11c9ff0 a2=0 a3=4 items=0 ppid=3943 pid=4141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.732000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:56:08.732000 audit: BPF prog-id=210 op=LOAD Dec 16 12:56:08.732000 audit[4141]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffef11c9e50 a2=94 a3=5 items=0 ppid=3943 pid=4141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.732000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:56:08.733000 audit: BPF prog-id=210 op=UNLOAD Dec 16 12:56:08.733000 audit[4141]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffef11c9e50 a2=0 a3=5 items=0 ppid=3943 pid=4141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.733000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:56:08.733000 audit: BPF prog-id=211 op=LOAD Dec 16 12:56:08.733000 audit[4141]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffef11ca070 a2=94 a3=6 items=0 ppid=3943 pid=4141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.733000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:56:08.733000 audit: BPF prog-id=211 op=UNLOAD Dec 16 12:56:08.733000 audit[4141]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffef11ca070 a2=0 a3=6 items=0 ppid=3943 pid=4141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.733000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:56:08.734000 audit: BPF prog-id=212 op=LOAD Dec 16 12:56:08.734000 audit[4141]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffef11c9820 a2=94 a3=88 items=0 ppid=3943 pid=4141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.734000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:56:08.734000 audit: BPF prog-id=213 op=LOAD Dec 16 12:56:08.734000 audit[4141]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffef11c96a0 a2=94 a3=2 items=0 ppid=3943 pid=4141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.734000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:56:08.734000 audit: BPF prog-id=213 op=UNLOAD Dec 16 12:56:08.734000 audit[4141]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffef11c96d0 a2=0 a3=7ffef11c97d0 items=0 ppid=3943 pid=4141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.734000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:56:08.735000 audit: BPF prog-id=212 op=UNLOAD Dec 16 12:56:08.735000 audit[4141]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=4c35d10 a2=0 a3=351374b618741bb5 items=0 ppid=3943 pid=4141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.735000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:56:08.741000 audit: BPF prog-id=204 op=UNLOAD Dec 16 12:56:08.741000 audit[3943]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000ccbb80 a2=0 a3=0 items=0 ppid=3930 pid=3943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.741000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 16 12:56:08.787328 containerd[1617]: time="2025-12-16T12:56:08.787267385Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:56:08.789544 containerd[1617]: time="2025-12-16T12:56:08.789168285Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:56:08.789544 containerd[1617]: time="2025-12-16T12:56:08.789211415Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:56:08.790521 kubelet[2789]: E1216 12:56:08.789692 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:56:08.790521 kubelet[2789]: E1216 12:56:08.789768 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:56:08.790670 kubelet[2789]: E1216 12:56:08.789955 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hqmlr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessag
ePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-675645944b-fkqz9_calico-system(dd978ccd-4987-4640-957a-11962c9801ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:56:08.791331 kubelet[2789]: E1216 12:56:08.791270 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-675645944b-fkqz9" podUID="dd978ccd-4987-4640-957a-11962c9801ea" Dec 16 12:56:08.817885 systemd-networkd[1510]: calie4e88637b72: Gained IPv6LL Dec 16 12:56:08.819000 audit[4165]: NETFILTER_CFG table=nat:121 family=2 entries=15 op=nft_register_chain pid=4165 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:56:08.819000 audit[4165]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fffa01b6280 a2=0 a3=7fffa01b626c items=0 ppid=3943 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.819000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:56:08.830000 audit[4168]: NETFILTER_CFG table=mangle:122 family=2 entries=16 op=nft_register_chain pid=4168 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:56:08.830000 audit[4168]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffd1e82c7a0 a2=0 a3=7ffd1e82c78c items=0 ppid=3943 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.830000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:56:08.833000 audit[4166]: NETFILTER_CFG table=raw:123 family=2 entries=21 op=nft_register_chain pid=4166 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:56:08.833000 audit[4166]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffcd9136de0 a2=0 a3=7ffcd9136dcc items=0 ppid=3943 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.833000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:56:08.840000 audit[4167]: NETFILTER_CFG table=filter:124 family=2 entries=94 op=nft_register_chain pid=4167 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:56:08.840000 audit[4167]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffded62a9d0 a2=0 a3=7ffded62a9bc items=0 ppid=3943 pid=4167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:08.840000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:56:09.418005 containerd[1617]: time="2025-12-16T12:56:09.417725721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f6f6f8cd5-nxqwt,Uid:4d5fd089-d56c-460c-b006-cc36a126ec32,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:56:09.575689 systemd-networkd[1510]: calia619822d329: Link UP Dec 16 12:56:09.577080 systemd-networkd[1510]: calia619822d329: Gained carrier Dec 16 12:56:09.602101 containerd[1617]: 2025-12-16 12:56:09.476 [INFO][4180] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--3--ef2be4b8ba-k8s-calico--apiserver--5f6f6f8cd5--nxqwt-eth0 calico-apiserver-5f6f6f8cd5- calico-apiserver 4d5fd089-d56c-460c-b006-cc36a126ec32 908 0 2025-12-16 12:55:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f6f6f8cd5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515.1.0-3-ef2be4b8ba calico-apiserver-5f6f6f8cd5-nxqwt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia619822d329 [] [] }} ContainerID="b74ec31cfc01befcf2c1652a943e7e3470742242dde09ffcc1ac4141ee493124" Namespace="calico-apiserver" Pod="calico-apiserver-5f6f6f8cd5-nxqwt" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-calico--apiserver--5f6f6f8cd5--nxqwt-" Dec 16 12:56:09.602101 containerd[1617]: 2025-12-16 12:56:09.476 [INFO][4180] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="b74ec31cfc01befcf2c1652a943e7e3470742242dde09ffcc1ac4141ee493124" Namespace="calico-apiserver" Pod="calico-apiserver-5f6f6f8cd5-nxqwt" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-calico--apiserver--5f6f6f8cd5--nxqwt-eth0" Dec 16 12:56:09.602101 containerd[1617]: 2025-12-16 12:56:09.514 [INFO][4193] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b74ec31cfc01befcf2c1652a943e7e3470742242dde09ffcc1ac4141ee493124" HandleID="k8s-pod-network.b74ec31cfc01befcf2c1652a943e7e3470742242dde09ffcc1ac4141ee493124" Workload="ci--4515.1.0--3--ef2be4b8ba-k8s-calico--apiserver--5f6f6f8cd5--nxqwt-eth0" Dec 16 12:56:09.602380 containerd[1617]: 2025-12-16 12:56:09.514 [INFO][4193] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b74ec31cfc01befcf2c1652a943e7e3470742242dde09ffcc1ac4141ee493124" HandleID="k8s-pod-network.b74ec31cfc01befcf2c1652a943e7e3470742242dde09ffcc1ac4141ee493124" Workload="ci--4515.1.0--3--ef2be4b8ba-k8s-calico--apiserver--5f6f6f8cd5--nxqwt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d57b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515.1.0-3-ef2be4b8ba", "pod":"calico-apiserver-5f6f6f8cd5-nxqwt", "timestamp":"2025-12-16 12:56:09.514091017 +0000 UTC"}, Hostname:"ci-4515.1.0-3-ef2be4b8ba", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:56:09.602380 containerd[1617]: 2025-12-16 12:56:09.514 [INFO][4193] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:56:09.602380 containerd[1617]: 2025-12-16 12:56:09.514 [INFO][4193] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:56:09.602380 containerd[1617]: 2025-12-16 12:56:09.514 [INFO][4193] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-3-ef2be4b8ba' Dec 16 12:56:09.602380 containerd[1617]: 2025-12-16 12:56:09.524 [INFO][4193] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b74ec31cfc01befcf2c1652a943e7e3470742242dde09ffcc1ac4141ee493124" host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:09.602380 containerd[1617]: 2025-12-16 12:56:09.530 [INFO][4193] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:09.602380 containerd[1617]: 2025-12-16 12:56:09.538 [INFO][4193] ipam/ipam.go 511: Trying affinity for 192.168.79.128/26 host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:09.602380 containerd[1617]: 2025-12-16 12:56:09.542 [INFO][4193] ipam/ipam.go 158: Attempting to load block cidr=192.168.79.128/26 host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:09.602380 containerd[1617]: 2025-12-16 12:56:09.546 [INFO][4193] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.79.128/26 host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:09.602625 containerd[1617]: 2025-12-16 12:56:09.546 [INFO][4193] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.79.128/26 handle="k8s-pod-network.b74ec31cfc01befcf2c1652a943e7e3470742242dde09ffcc1ac4141ee493124" host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:09.602625 containerd[1617]: 2025-12-16 12:56:09.551 [INFO][4193] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b74ec31cfc01befcf2c1652a943e7e3470742242dde09ffcc1ac4141ee493124 Dec 16 12:56:09.602625 containerd[1617]: 2025-12-16 12:56:09.558 [INFO][4193] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.79.128/26 handle="k8s-pod-network.b74ec31cfc01befcf2c1652a943e7e3470742242dde09ffcc1ac4141ee493124" host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:09.602625 containerd[1617]: 2025-12-16 12:56:09.566 [INFO][4193] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.79.130/26] block=192.168.79.128/26 handle="k8s-pod-network.b74ec31cfc01befcf2c1652a943e7e3470742242dde09ffcc1ac4141ee493124" host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:09.602625 containerd[1617]: 2025-12-16 12:56:09.566 [INFO][4193] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.79.130/26] handle="k8s-pod-network.b74ec31cfc01befcf2c1652a943e7e3470742242dde09ffcc1ac4141ee493124" host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:09.602625 containerd[1617]: 2025-12-16 12:56:09.567 [INFO][4193] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:56:09.602625 containerd[1617]: 2025-12-16 12:56:09.567 [INFO][4193] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.79.130/26] IPv6=[] ContainerID="b74ec31cfc01befcf2c1652a943e7e3470742242dde09ffcc1ac4141ee493124" HandleID="k8s-pod-network.b74ec31cfc01befcf2c1652a943e7e3470742242dde09ffcc1ac4141ee493124" Workload="ci--4515.1.0--3--ef2be4b8ba-k8s-calico--apiserver--5f6f6f8cd5--nxqwt-eth0" Dec 16 12:56:09.603110 containerd[1617]: 2025-12-16 12:56:09.571 [INFO][4180] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b74ec31cfc01befcf2c1652a943e7e3470742242dde09ffcc1ac4141ee493124" Namespace="calico-apiserver" Pod="calico-apiserver-5f6f6f8cd5-nxqwt" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-calico--apiserver--5f6f6f8cd5--nxqwt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--3--ef2be4b8ba-k8s-calico--apiserver--5f6f6f8cd5--nxqwt-eth0", GenerateName:"calico-apiserver-5f6f6f8cd5-", Namespace:"calico-apiserver", SelfLink:"", UID:"4d5fd089-d56c-460c-b006-cc36a126ec32", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 55, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f6f6f8cd5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-3-ef2be4b8ba", ContainerID:"", Pod:"calico-apiserver-5f6f6f8cd5-nxqwt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia619822d329", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:56:09.603188 containerd[1617]: 2025-12-16 12:56:09.571 [INFO][4180] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.130/32] ContainerID="b74ec31cfc01befcf2c1652a943e7e3470742242dde09ffcc1ac4141ee493124" Namespace="calico-apiserver" Pod="calico-apiserver-5f6f6f8cd5-nxqwt" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-calico--apiserver--5f6f6f8cd5--nxqwt-eth0" Dec 16 12:56:09.603188 containerd[1617]: 2025-12-16 12:56:09.571 [INFO][4180] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia619822d329 ContainerID="b74ec31cfc01befcf2c1652a943e7e3470742242dde09ffcc1ac4141ee493124" Namespace="calico-apiserver" Pod="calico-apiserver-5f6f6f8cd5-nxqwt" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-calico--apiserver--5f6f6f8cd5--nxqwt-eth0" Dec 16 12:56:09.603188 containerd[1617]: 2025-12-16 12:56:09.578 [INFO][4180] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b74ec31cfc01befcf2c1652a943e7e3470742242dde09ffcc1ac4141ee493124" Namespace="calico-apiserver" 
Pod="calico-apiserver-5f6f6f8cd5-nxqwt" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-calico--apiserver--5f6f6f8cd5--nxqwt-eth0" Dec 16 12:56:09.603267 containerd[1617]: 2025-12-16 12:56:09.579 [INFO][4180] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b74ec31cfc01befcf2c1652a943e7e3470742242dde09ffcc1ac4141ee493124" Namespace="calico-apiserver" Pod="calico-apiserver-5f6f6f8cd5-nxqwt" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-calico--apiserver--5f6f6f8cd5--nxqwt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--3--ef2be4b8ba-k8s-calico--apiserver--5f6f6f8cd5--nxqwt-eth0", GenerateName:"calico-apiserver-5f6f6f8cd5-", Namespace:"calico-apiserver", SelfLink:"", UID:"4d5fd089-d56c-460c-b006-cc36a126ec32", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 55, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f6f6f8cd5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-3-ef2be4b8ba", ContainerID:"b74ec31cfc01befcf2c1652a943e7e3470742242dde09ffcc1ac4141ee493124", Pod:"calico-apiserver-5f6f6f8cd5-nxqwt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"calia619822d329", MAC:"2e:68:c5:ac:e7:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:56:09.603342 containerd[1617]: 2025-12-16 12:56:09.597 [INFO][4180] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b74ec31cfc01befcf2c1652a943e7e3470742242dde09ffcc1ac4141ee493124" Namespace="calico-apiserver" Pod="calico-apiserver-5f6f6f8cd5-nxqwt" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-calico--apiserver--5f6f6f8cd5--nxqwt-eth0" Dec 16 12:56:09.625000 audit[4210]: NETFILTER_CFG table=filter:125 family=2 entries=50 op=nft_register_chain pid=4210 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:56:09.625000 audit[4210]: SYSCALL arch=c000003e syscall=46 success=yes exit=28208 a0=3 a1=7fff8cde1b80 a2=0 a3=7fff8cde1b6c items=0 ppid=3943 pid=4210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:09.625000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:56:09.628676 containerd[1617]: time="2025-12-16T12:56:09.628617566Z" level=info msg="connecting to shim b74ec31cfc01befcf2c1652a943e7e3470742242dde09ffcc1ac4141ee493124" address="unix:///run/containerd/s/7bacdaa0ef13b4014f62fe7b921506778b56bb91a7055942052cbd5810c25e63" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:56:09.668945 systemd[1]: Started cri-containerd-b74ec31cfc01befcf2c1652a943e7e3470742242dde09ffcc1ac4141ee493124.scope - libcontainer container b74ec31cfc01befcf2c1652a943e7e3470742242dde09ffcc1ac4141ee493124. 
Dec 16 12:56:09.688000 audit: BPF prog-id=214 op=LOAD Dec 16 12:56:09.689000 audit: BPF prog-id=215 op=LOAD Dec 16 12:56:09.689000 audit[4226]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4215 pid=4226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:09.689000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237346563333163666330316265666366326331363532613934336537 Dec 16 12:56:09.689000 audit: BPF prog-id=215 op=UNLOAD Dec 16 12:56:09.689000 audit[4226]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4215 pid=4226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:09.689000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237346563333163666330316265666366326331363532613934336537 Dec 16 12:56:09.689000 audit: BPF prog-id=216 op=LOAD Dec 16 12:56:09.689000 audit[4226]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4215 pid=4226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:09.689000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237346563333163666330316265666366326331363532613934336537 Dec 16 12:56:09.689000 audit: BPF prog-id=217 op=LOAD Dec 16 12:56:09.689000 audit[4226]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4215 pid=4226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:09.689000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237346563333163666330316265666366326331363532613934336537 Dec 16 12:56:09.689000 audit: BPF prog-id=217 op=UNLOAD Dec 16 12:56:09.689000 audit[4226]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4215 pid=4226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:09.689000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237346563333163666330316265666366326331363532613934336537 Dec 16 12:56:09.689000 audit: BPF prog-id=216 op=UNLOAD Dec 16 12:56:09.689000 audit[4226]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4215 pid=4226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:56:09.689000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237346563333163666330316265666366326331363532613934336537 Dec 16 12:56:09.689000 audit: BPF prog-id=218 op=LOAD Dec 16 12:56:09.689000 audit[4226]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=4215 pid=4226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:09.689000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237346563333163666330316265666366326331363532613934336537 Dec 16 12:56:09.710653 kubelet[2789]: E1216 12:56:09.710508 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-675645944b-fkqz9" podUID="dd978ccd-4987-4640-957a-11962c9801ea" Dec 16 12:56:09.713955 systemd-networkd[1510]: vxlan.calico: Gained IPv6LL 
Dec 16 12:56:09.773000 audit[4253]: NETFILTER_CFG table=filter:126 family=2 entries=20 op=nft_register_rule pid=4253 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:56:09.773000 audit[4253]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff3ee24670 a2=0 a3=7fff3ee2465c items=0 ppid=2936 pid=4253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:09.773000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:56:09.777000 audit[4253]: NETFILTER_CFG table=nat:127 family=2 entries=14 op=nft_register_rule pid=4253 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:56:09.777000 audit[4253]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff3ee24670 a2=0 a3=0 items=0 ppid=2936 pid=4253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:09.777000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:56:09.780000 containerd[1617]: time="2025-12-16T12:56:09.779122140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f6f6f8cd5-nxqwt,Uid:4d5fd089-d56c-460c-b006-cc36a126ec32,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b74ec31cfc01befcf2c1652a943e7e3470742242dde09ffcc1ac4141ee493124\"" Dec 16 12:56:09.781526 containerd[1617]: time="2025-12-16T12:56:09.781420168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:56:09.827981 kubelet[2789]: I1216 12:56:09.827803 2789 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" 
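The `proctitle=` values in the audit records above are the process's argv hex-encoded, with NUL bytes separating the arguments. A minimal Python sketch to decode one (the hex value is copied verbatim from the iptables-restore record above; the variable names are illustrative only):

```python
# Decode an audit PROCTITLE field: hex -> bytes -> NUL-separated argv list.
hex_proctitle = (
    "69707461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"
)

argv = bytes.fromhex(hex_proctitle).decode("ascii").split("\x00")
print(argv)
# → ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']
```

This confirms the record corresponds to `iptables-restore -w 5 -W 100000 --noflush --counters`, matching the `comm="iptables-restor"` (truncated to 15 characters by the kernel) and `exe="/usr/sbin/xtables-nft-multi"` fields in the same record.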
Dec 16 12:56:09.830145 kubelet[2789]: E1216 12:56:09.829674 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:56:10.129922 containerd[1617]: time="2025-12-16T12:56:10.129855049Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:56:10.133485 containerd[1617]: time="2025-12-16T12:56:10.133336953Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:56:10.134840 containerd[1617]: time="2025-12-16T12:56:10.133378603Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:56:10.135502 kubelet[2789]: E1216 12:56:10.135445 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:56:10.137667 kubelet[2789]: E1216 12:56:10.137432 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:56:10.139617 kubelet[2789]: E1216 12:56:10.138860 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l6wnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f6f6f8cd5-nxqwt_calico-apiserver(4d5fd089-d56c-460c-b006-cc36a126ec32): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:56:10.142175 kubelet[2789]: E1216 12:56:10.142063 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f6f6f8cd5-nxqwt" podUID="4d5fd089-d56c-460c-b006-cc36a126ec32" Dec 16 12:56:10.419105 containerd[1617]: time="2025-12-16T12:56:10.418966293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f6f6f8cd5-qx8fz,Uid:5a9eaa6d-ffc5-496a-b44d-d7e196b6b18c,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:56:10.420314 containerd[1617]: time="2025-12-16T12:56:10.420265510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-hvm4l,Uid:9d74f6fb-d2c4-41ff-9241-88dfaec31538,Namespace:calico-system,Attempt:0,}" Dec 16 12:56:10.641056 systemd-networkd[1510]: cali1ce36baf194: Link UP Dec 16 12:56:10.642533 systemd-networkd[1510]: cali1ce36baf194: Gained carrier Dec 16 12:56:10.665409 containerd[1617]: 2025-12-16 12:56:10.502 [INFO][4308] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--3--ef2be4b8ba-k8s-calico--apiserver--5f6f6f8cd5--qx8fz-eth0 calico-apiserver-5f6f6f8cd5- calico-apiserver 5a9eaa6d-ffc5-496a-b44d-d7e196b6b18c 911 0 2025-12-16 12:55:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f6f6f8cd5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515.1.0-3-ef2be4b8ba 
calico-apiserver-5f6f6f8cd5-qx8fz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1ce36baf194 [] [] }} ContainerID="96933cc3e17aca0557540e9e2ba7e82c2b66e6d4913503740efb3c45c501545f" Namespace="calico-apiserver" Pod="calico-apiserver-5f6f6f8cd5-qx8fz" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-calico--apiserver--5f6f6f8cd5--qx8fz-" Dec 16 12:56:10.665409 containerd[1617]: 2025-12-16 12:56:10.502 [INFO][4308] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="96933cc3e17aca0557540e9e2ba7e82c2b66e6d4913503740efb3c45c501545f" Namespace="calico-apiserver" Pod="calico-apiserver-5f6f6f8cd5-qx8fz" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-calico--apiserver--5f6f6f8cd5--qx8fz-eth0" Dec 16 12:56:10.665409 containerd[1617]: 2025-12-16 12:56:10.557 [INFO][4327] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="96933cc3e17aca0557540e9e2ba7e82c2b66e6d4913503740efb3c45c501545f" HandleID="k8s-pod-network.96933cc3e17aca0557540e9e2ba7e82c2b66e6d4913503740efb3c45c501545f" Workload="ci--4515.1.0--3--ef2be4b8ba-k8s-calico--apiserver--5f6f6f8cd5--qx8fz-eth0" Dec 16 12:56:10.666760 containerd[1617]: 2025-12-16 12:56:10.558 [INFO][4327] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="96933cc3e17aca0557540e9e2ba7e82c2b66e6d4913503740efb3c45c501545f" HandleID="k8s-pod-network.96933cc3e17aca0557540e9e2ba7e82c2b66e6d4913503740efb3c45c501545f" Workload="ci--4515.1.0--3--ef2be4b8ba-k8s-calico--apiserver--5f6f6f8cd5--qx8fz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f860), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515.1.0-3-ef2be4b8ba", "pod":"calico-apiserver-5f6f6f8cd5-qx8fz", "timestamp":"2025-12-16 12:56:10.55758566 +0000 UTC"}, Hostname:"ci-4515.1.0-3-ef2be4b8ba", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:56:10.666760 containerd[1617]: 2025-12-16 12:56:10.558 [INFO][4327] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:56:10.666760 containerd[1617]: 2025-12-16 12:56:10.558 [INFO][4327] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:56:10.666760 containerd[1617]: 2025-12-16 12:56:10.558 [INFO][4327] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-3-ef2be4b8ba' Dec 16 12:56:10.666760 containerd[1617]: 2025-12-16 12:56:10.570 [INFO][4327] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.96933cc3e17aca0557540e9e2ba7e82c2b66e6d4913503740efb3c45c501545f" host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:10.666760 containerd[1617]: 2025-12-16 12:56:10.579 [INFO][4327] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:10.666760 containerd[1617]: 2025-12-16 12:56:10.595 [INFO][4327] ipam/ipam.go 511: Trying affinity for 192.168.79.128/26 host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:10.666760 containerd[1617]: 2025-12-16 12:56:10.598 [INFO][4327] ipam/ipam.go 158: Attempting to load block cidr=192.168.79.128/26 host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:10.666760 containerd[1617]: 2025-12-16 12:56:10.603 [INFO][4327] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.79.128/26 host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:10.667047 containerd[1617]: 2025-12-16 12:56:10.604 [INFO][4327] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.79.128/26 handle="k8s-pod-network.96933cc3e17aca0557540e9e2ba7e82c2b66e6d4913503740efb3c45c501545f" host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:10.667047 containerd[1617]: 2025-12-16 12:56:10.607 [INFO][4327] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.96933cc3e17aca0557540e9e2ba7e82c2b66e6d4913503740efb3c45c501545f Dec 16 12:56:10.667047 
containerd[1617]: 2025-12-16 12:56:10.614 [INFO][4327] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.79.128/26 handle="k8s-pod-network.96933cc3e17aca0557540e9e2ba7e82c2b66e6d4913503740efb3c45c501545f" host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:10.667047 containerd[1617]: 2025-12-16 12:56:10.626 [INFO][4327] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.79.131/26] block=192.168.79.128/26 handle="k8s-pod-network.96933cc3e17aca0557540e9e2ba7e82c2b66e6d4913503740efb3c45c501545f" host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:10.667047 containerd[1617]: 2025-12-16 12:56:10.626 [INFO][4327] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.79.131/26] handle="k8s-pod-network.96933cc3e17aca0557540e9e2ba7e82c2b66e6d4913503740efb3c45c501545f" host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:10.667047 containerd[1617]: 2025-12-16 12:56:10.627 [INFO][4327] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:56:10.667047 containerd[1617]: 2025-12-16 12:56:10.627 [INFO][4327] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.79.131/26] IPv6=[] ContainerID="96933cc3e17aca0557540e9e2ba7e82c2b66e6d4913503740efb3c45c501545f" HandleID="k8s-pod-network.96933cc3e17aca0557540e9e2ba7e82c2b66e6d4913503740efb3c45c501545f" Workload="ci--4515.1.0--3--ef2be4b8ba-k8s-calico--apiserver--5f6f6f8cd5--qx8fz-eth0" Dec 16 12:56:10.667343 containerd[1617]: 2025-12-16 12:56:10.633 [INFO][4308] cni-plugin/k8s.go 418: Populated endpoint ContainerID="96933cc3e17aca0557540e9e2ba7e82c2b66e6d4913503740efb3c45c501545f" Namespace="calico-apiserver" Pod="calico-apiserver-5f6f6f8cd5-qx8fz" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-calico--apiserver--5f6f6f8cd5--qx8fz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--3--ef2be4b8ba-k8s-calico--apiserver--5f6f6f8cd5--qx8fz-eth0", GenerateName:"calico-apiserver-5f6f6f8cd5-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"5a9eaa6d-ffc5-496a-b44d-d7e196b6b18c", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 55, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f6f6f8cd5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-3-ef2be4b8ba", ContainerID:"", Pod:"calico-apiserver-5f6f6f8cd5-qx8fz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1ce36baf194", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:56:10.667426 containerd[1617]: 2025-12-16 12:56:10.633 [INFO][4308] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.131/32] ContainerID="96933cc3e17aca0557540e9e2ba7e82c2b66e6d4913503740efb3c45c501545f" Namespace="calico-apiserver" Pod="calico-apiserver-5f6f6f8cd5-qx8fz" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-calico--apiserver--5f6f6f8cd5--qx8fz-eth0" Dec 16 12:56:10.667426 containerd[1617]: 2025-12-16 12:56:10.633 [INFO][4308] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1ce36baf194 ContainerID="96933cc3e17aca0557540e9e2ba7e82c2b66e6d4913503740efb3c45c501545f" Namespace="calico-apiserver" Pod="calico-apiserver-5f6f6f8cd5-qx8fz" 
WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-calico--apiserver--5f6f6f8cd5--qx8fz-eth0" Dec 16 12:56:10.667426 containerd[1617]: 2025-12-16 12:56:10.641 [INFO][4308] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="96933cc3e17aca0557540e9e2ba7e82c2b66e6d4913503740efb3c45c501545f" Namespace="calico-apiserver" Pod="calico-apiserver-5f6f6f8cd5-qx8fz" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-calico--apiserver--5f6f6f8cd5--qx8fz-eth0" Dec 16 12:56:10.667499 containerd[1617]: 2025-12-16 12:56:10.646 [INFO][4308] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="96933cc3e17aca0557540e9e2ba7e82c2b66e6d4913503740efb3c45c501545f" Namespace="calico-apiserver" Pod="calico-apiserver-5f6f6f8cd5-qx8fz" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-calico--apiserver--5f6f6f8cd5--qx8fz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--3--ef2be4b8ba-k8s-calico--apiserver--5f6f6f8cd5--qx8fz-eth0", GenerateName:"calico-apiserver-5f6f6f8cd5-", Namespace:"calico-apiserver", SelfLink:"", UID:"5a9eaa6d-ffc5-496a-b44d-d7e196b6b18c", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 55, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f6f6f8cd5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-3-ef2be4b8ba", 
ContainerID:"96933cc3e17aca0557540e9e2ba7e82c2b66e6d4913503740efb3c45c501545f", Pod:"calico-apiserver-5f6f6f8cd5-qx8fz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1ce36baf194", MAC:"92:0c:42:bf:14:8a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:56:10.667560 containerd[1617]: 2025-12-16 12:56:10.662 [INFO][4308] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="96933cc3e17aca0557540e9e2ba7e82c2b66e6d4913503740efb3c45c501545f" Namespace="calico-apiserver" Pod="calico-apiserver-5f6f6f8cd5-qx8fz" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-calico--apiserver--5f6f6f8cd5--qx8fz-eth0" Dec 16 12:56:10.695000 audit[4348]: NETFILTER_CFG table=filter:128 family=2 entries=41 op=nft_register_chain pid=4348 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:56:10.699823 kernel: kauditd_printk_skb: 256 callbacks suppressed Dec 16 12:56:10.699892 kernel: audit: type=1325 audit(1765889770.695:670): table=filter:128 family=2 entries=41 op=nft_register_chain pid=4348 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:56:10.695000 audit[4348]: SYSCALL arch=c000003e syscall=46 success=yes exit=23076 a0=3 a1=7ffe60775da0 a2=0 a3=7ffe60775d8c items=0 ppid=3943 pid=4348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:10.715661 kernel: audit: type=1300 audit(1765889770.695:670): arch=c000003e syscall=46 success=yes exit=23076 a0=3 a1=7ffe60775da0 a2=0 a3=7ffe60775d8c items=0 ppid=3943 pid=4348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:10.695000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:56:10.722829 kernel: audit: type=1327 audit(1765889770.695:670): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:56:10.729702 containerd[1617]: time="2025-12-16T12:56:10.729620481Z" level=info msg="connecting to shim 96933cc3e17aca0557540e9e2ba7e82c2b66e6d4913503740efb3c45c501545f" address="unix:///run/containerd/s/422c75a45595474835808550c662f362c63a115d3533b2f5775418160a757a14" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:56:10.732544 kubelet[2789]: E1216 12:56:10.732340 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:56:10.736257 kubelet[2789]: E1216 12:56:10.736155 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f6f6f8cd5-nxqwt" podUID="4d5fd089-d56c-460c-b006-cc36a126ec32" Dec 16 12:56:10.799033 systemd[1]: Started cri-containerd-96933cc3e17aca0557540e9e2ba7e82c2b66e6d4913503740efb3c45c501545f.scope - libcontainer container 96933cc3e17aca0557540e9e2ba7e82c2b66e6d4913503740efb3c45c501545f. 
Dec 16 12:56:10.818800 systemd-networkd[1510]: calic6f2cde6ca2: Link UP Dec 16 12:56:10.821731 systemd-networkd[1510]: calic6f2cde6ca2: Gained carrier Dec 16 12:56:10.849107 containerd[1617]: 2025-12-16 12:56:10.511 [INFO][4303] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--3--ef2be4b8ba-k8s-goldmane--666569f655--hvm4l-eth0 goldmane-666569f655- calico-system 9d74f6fb-d2c4-41ff-9241-88dfaec31538 909 0 2025-12-16 12:55:45 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4515.1.0-3-ef2be4b8ba goldmane-666569f655-hvm4l eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic6f2cde6ca2 [] [] }} ContainerID="8b705ab913b9e434280bcec155baec904ecdb965395238e752553314079fefb0" Namespace="calico-system" Pod="goldmane-666569f655-hvm4l" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-goldmane--666569f655--hvm4l-" Dec 16 12:56:10.849107 containerd[1617]: 2025-12-16 12:56:10.511 [INFO][4303] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8b705ab913b9e434280bcec155baec904ecdb965395238e752553314079fefb0" Namespace="calico-system" Pod="goldmane-666569f655-hvm4l" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-goldmane--666569f655--hvm4l-eth0" Dec 16 12:56:10.849107 containerd[1617]: 2025-12-16 12:56:10.570 [INFO][4332] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8b705ab913b9e434280bcec155baec904ecdb965395238e752553314079fefb0" HandleID="k8s-pod-network.8b705ab913b9e434280bcec155baec904ecdb965395238e752553314079fefb0" Workload="ci--4515.1.0--3--ef2be4b8ba-k8s-goldmane--666569f655--hvm4l-eth0" Dec 16 12:56:10.849833 containerd[1617]: 2025-12-16 12:56:10.570 [INFO][4332] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="8b705ab913b9e434280bcec155baec904ecdb965395238e752553314079fefb0" HandleID="k8s-pod-network.8b705ab913b9e434280bcec155baec904ecdb965395238e752553314079fefb0" Workload="ci--4515.1.0--3--ef2be4b8ba-k8s-goldmane--666569f655--hvm4l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d57b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-3-ef2be4b8ba", "pod":"goldmane-666569f655-hvm4l", "timestamp":"2025-12-16 12:56:10.570687477 +0000 UTC"}, Hostname:"ci-4515.1.0-3-ef2be4b8ba", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:56:10.849833 containerd[1617]: 2025-12-16 12:56:10.570 [INFO][4332] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:56:10.849833 containerd[1617]: 2025-12-16 12:56:10.627 [INFO][4332] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:56:10.849833 containerd[1617]: 2025-12-16 12:56:10.628 [INFO][4332] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-3-ef2be4b8ba' Dec 16 12:56:10.849833 containerd[1617]: 2025-12-16 12:56:10.673 [INFO][4332] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8b705ab913b9e434280bcec155baec904ecdb965395238e752553314079fefb0" host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:10.849833 containerd[1617]: 2025-12-16 12:56:10.692 [INFO][4332] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:10.849833 containerd[1617]: 2025-12-16 12:56:10.718 [INFO][4332] ipam/ipam.go 511: Trying affinity for 192.168.79.128/26 host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:10.849833 containerd[1617]: 2025-12-16 12:56:10.728 [INFO][4332] ipam/ipam.go 158: Attempting to load block cidr=192.168.79.128/26 host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:10.849833 containerd[1617]: 2025-12-16 12:56:10.736 [INFO][4332] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.79.128/26 host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:10.850214 containerd[1617]: 2025-12-16 12:56:10.737 [INFO][4332] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.79.128/26 handle="k8s-pod-network.8b705ab913b9e434280bcec155baec904ecdb965395238e752553314079fefb0" host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:10.850214 containerd[1617]: 2025-12-16 12:56:10.749 [INFO][4332] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8b705ab913b9e434280bcec155baec904ecdb965395238e752553314079fefb0 Dec 16 12:56:10.850214 containerd[1617]: 2025-12-16 12:56:10.777 [INFO][4332] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.79.128/26 handle="k8s-pod-network.8b705ab913b9e434280bcec155baec904ecdb965395238e752553314079fefb0" host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:10.850214 containerd[1617]: 2025-12-16 12:56:10.807 [INFO][4332] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.79.132/26] block=192.168.79.128/26 handle="k8s-pod-network.8b705ab913b9e434280bcec155baec904ecdb965395238e752553314079fefb0" host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:10.850214 containerd[1617]: 2025-12-16 12:56:10.807 [INFO][4332] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.79.132/26] handle="k8s-pod-network.8b705ab913b9e434280bcec155baec904ecdb965395238e752553314079fefb0" host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:10.850214 containerd[1617]: 2025-12-16 12:56:10.807 [INFO][4332] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:56:10.850214 containerd[1617]: 2025-12-16 12:56:10.807 [INFO][4332] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.79.132/26] IPv6=[] ContainerID="8b705ab913b9e434280bcec155baec904ecdb965395238e752553314079fefb0" HandleID="k8s-pod-network.8b705ab913b9e434280bcec155baec904ecdb965395238e752553314079fefb0" Workload="ci--4515.1.0--3--ef2be4b8ba-k8s-goldmane--666569f655--hvm4l-eth0" Dec 16 12:56:10.850412 containerd[1617]: 2025-12-16 12:56:10.811 [INFO][4303] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8b705ab913b9e434280bcec155baec904ecdb965395238e752553314079fefb0" Namespace="calico-system" Pod="goldmane-666569f655-hvm4l" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-goldmane--666569f655--hvm4l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--3--ef2be4b8ba-k8s-goldmane--666569f655--hvm4l-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9d74f6fb-d2c4-41ff-9241-88dfaec31538", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 55, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-3-ef2be4b8ba", ContainerID:"", Pod:"goldmane-666569f655-hvm4l", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.79.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic6f2cde6ca2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:56:10.850542 containerd[1617]: 2025-12-16 12:56:10.811 [INFO][4303] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.132/32] ContainerID="8b705ab913b9e434280bcec155baec904ecdb965395238e752553314079fefb0" Namespace="calico-system" Pod="goldmane-666569f655-hvm4l" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-goldmane--666569f655--hvm4l-eth0" Dec 16 12:56:10.850542 containerd[1617]: 2025-12-16 12:56:10.811 [INFO][4303] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic6f2cde6ca2 ContainerID="8b705ab913b9e434280bcec155baec904ecdb965395238e752553314079fefb0" Namespace="calico-system" Pod="goldmane-666569f655-hvm4l" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-goldmane--666569f655--hvm4l-eth0" Dec 16 12:56:10.850542 containerd[1617]: 2025-12-16 12:56:10.821 [INFO][4303] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8b705ab913b9e434280bcec155baec904ecdb965395238e752553314079fefb0" Namespace="calico-system" Pod="goldmane-666569f655-hvm4l" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-goldmane--666569f655--hvm4l-eth0" Dec 16 12:56:10.850703 containerd[1617]: 2025-12-16 12:56:10.821 [INFO][4303] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8b705ab913b9e434280bcec155baec904ecdb965395238e752553314079fefb0" Namespace="calico-system" Pod="goldmane-666569f655-hvm4l" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-goldmane--666569f655--hvm4l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--3--ef2be4b8ba-k8s-goldmane--666569f655--hvm4l-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9d74f6fb-d2c4-41ff-9241-88dfaec31538", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 55, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-3-ef2be4b8ba", ContainerID:"8b705ab913b9e434280bcec155baec904ecdb965395238e752553314079fefb0", Pod:"goldmane-666569f655-hvm4l", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.79.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic6f2cde6ca2", MAC:"3a:4c:0e:85:b7:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:56:10.850769 containerd[1617]: 2025-12-16 12:56:10.842 [INFO][4303] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="8b705ab913b9e434280bcec155baec904ecdb965395238e752553314079fefb0" Namespace="calico-system" Pod="goldmane-666569f655-hvm4l" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-goldmane--666569f655--hvm4l-eth0" Dec 16 12:56:10.854000 audit[4391]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=4391 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:56:10.859689 kernel: audit: type=1325 audit(1765889770.854:671): table=filter:129 family=2 entries=20 op=nft_register_rule pid=4391 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:56:10.854000 audit[4391]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff55844d80 a2=0 a3=7fff55844d6c items=0 ppid=2936 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:10.865755 kernel: audit: type=1300 audit(1765889770.854:671): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff55844d80 a2=0 a3=7fff55844d6c items=0 ppid=2936 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:10.854000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:56:10.874751 kernel: audit: type=1327 audit(1765889770.854:671): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:56:10.859000 audit[4391]: NETFILTER_CFG table=nat:130 family=2 entries=14 op=nft_register_rule pid=4391 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:56:10.878656 kernel: audit: type=1325 audit(1765889770.859:672): table=nat:130 family=2 entries=14 op=nft_register_rule 
pid=4391 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:56:10.859000 audit[4391]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff55844d80 a2=0 a3=0 items=0 ppid=2936 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:10.885796 kernel: audit: type=1300 audit(1765889770.859:672): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff55844d80 a2=0 a3=0 items=0 ppid=2936 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:10.893691 kernel: audit: type=1327 audit(1765889770.859:672): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:56:10.859000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:56:10.886000 audit: BPF prog-id=219 op=LOAD Dec 16 12:56:10.896658 kernel: audit: type=1334 audit(1765889770.886:673): prog-id=219 op=LOAD Dec 16 12:56:10.887000 audit: BPF prog-id=220 op=LOAD Dec 16 12:56:10.887000 audit[4369]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4358 pid=4369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:10.887000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936393333636333653137616361303535373534306539653262613765 Dec 16 12:56:10.887000 
audit: BPF prog-id=220 op=UNLOAD Dec 16 12:56:10.887000 audit[4369]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4358 pid=4369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:10.887000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936393333636333653137616361303535373534306539653262613765 Dec 16 12:56:10.887000 audit: BPF prog-id=221 op=LOAD Dec 16 12:56:10.887000 audit[4369]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4358 pid=4369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:10.887000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936393333636333653137616361303535373534306539653262613765 Dec 16 12:56:10.887000 audit: BPF prog-id=222 op=LOAD Dec 16 12:56:10.887000 audit[4369]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4358 pid=4369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:10.887000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936393333636333653137616361303535373534306539653262613765 Dec 16 12:56:10.887000 audit: BPF prog-id=222 op=UNLOAD Dec 16 12:56:10.887000 audit[4369]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4358 pid=4369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:10.887000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936393333636333653137616361303535373534306539653262613765 Dec 16 12:56:10.887000 audit: BPF prog-id=221 op=UNLOAD Dec 16 12:56:10.887000 audit[4369]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4358 pid=4369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:10.887000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936393333636333653137616361303535373534306539653262613765 Dec 16 12:56:10.888000 audit: BPF prog-id=223 op=LOAD Dec 16 12:56:10.888000 audit[4369]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4358 pid=4369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:56:10.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936393333636333653137616361303535373534306539653262613765 Dec 16 12:56:10.927439 containerd[1617]: time="2025-12-16T12:56:10.927385750Z" level=info msg="connecting to shim 8b705ab913b9e434280bcec155baec904ecdb965395238e752553314079fefb0" address="unix:///run/containerd/s/f1f8de0e8ba7c273a022d84544947b1f94ec27dcca1dedb8107911359029e9ee" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:56:10.929095 systemd-networkd[1510]: calia619822d329: Gained IPv6LL Dec 16 12:56:10.999421 systemd[1]: Started cri-containerd-8b705ab913b9e434280bcec155baec904ecdb965395238e752553314079fefb0.scope - libcontainer container 8b705ab913b9e434280bcec155baec904ecdb965395238e752553314079fefb0. Dec 16 12:56:11.012000 audit[4424]: NETFILTER_CFG table=filter:131 family=2 entries=52 op=nft_register_chain pid=4424 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:56:11.012000 audit[4424]: SYSCALL arch=c000003e syscall=46 success=yes exit=27556 a0=3 a1=7fff0d903570 a2=0 a3=7fff0d90355c items=0 ppid=3943 pid=4424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:11.012000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:56:11.051000 audit: BPF prog-id=224 op=LOAD Dec 16 12:56:11.052000 audit: BPF prog-id=225 op=LOAD Dec 16 12:56:11.052000 audit[4418]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4407 pid=4418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:11.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862373035616239313362396534333432383062636563313535626165 Dec 16 12:56:11.052000 audit: BPF prog-id=225 op=UNLOAD Dec 16 12:56:11.052000 audit[4418]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4407 pid=4418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:11.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862373035616239313362396534333432383062636563313535626165 Dec 16 12:56:11.053000 audit: BPF prog-id=226 op=LOAD Dec 16 12:56:11.053000 audit[4418]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4407 pid=4418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:11.053000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862373035616239313362396534333432383062636563313535626165 Dec 16 12:56:11.053000 audit: BPF prog-id=227 op=LOAD Dec 16 12:56:11.053000 audit[4418]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4407 pid=4418 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:11.053000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862373035616239313362396534333432383062636563313535626165 Dec 16 12:56:11.053000 audit: BPF prog-id=227 op=UNLOAD Dec 16 12:56:11.053000 audit[4418]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4407 pid=4418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:11.053000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862373035616239313362396534333432383062636563313535626165 Dec 16 12:56:11.053000 audit: BPF prog-id=226 op=UNLOAD Dec 16 12:56:11.053000 audit[4418]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4407 pid=4418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:11.053000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862373035616239313362396534333432383062636563313535626165 Dec 16 12:56:11.053000 audit: BPF prog-id=228 op=LOAD Dec 16 12:56:11.053000 audit[4418]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4407 pid=4418 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:11.053000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862373035616239313362396534333432383062636563313535626165 Dec 16 12:56:11.117684 containerd[1617]: time="2025-12-16T12:56:11.117599725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f6f6f8cd5-qx8fz,Uid:5a9eaa6d-ffc5-496a-b44d-d7e196b6b18c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"96933cc3e17aca0557540e9e2ba7e82c2b66e6d4913503740efb3c45c501545f\"" Dec 16 12:56:11.121208 containerd[1617]: time="2025-12-16T12:56:11.120911143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:56:11.166842 containerd[1617]: time="2025-12-16T12:56:11.166659901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-hvm4l,Uid:9d74f6fb-d2c4-41ff-9241-88dfaec31538,Namespace:calico-system,Attempt:0,} returns sandbox id \"8b705ab913b9e434280bcec155baec904ecdb965395238e752553314079fefb0\"" Dec 16 12:56:11.419523 containerd[1617]: time="2025-12-16T12:56:11.419310268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rmjtf,Uid:7b89c039-0754-43bd-ad85-5506dee48dad,Namespace:calico-system,Attempt:0,}" Dec 16 12:56:11.458362 containerd[1617]: time="2025-12-16T12:56:11.457907809Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:56:11.459515 containerd[1617]: time="2025-12-16T12:56:11.459396438Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:56:11.459987 containerd[1617]: time="2025-12-16T12:56:11.459856960Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:56:11.461105 kubelet[2789]: E1216 12:56:11.461036 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:56:11.461481 kubelet[2789]: E1216 12:56:11.461317 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:56:11.462551 kubelet[2789]: E1216 12:56:11.461936 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-plx6k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f6f6f8cd5-qx8fz_calico-apiserver(5a9eaa6d-ffc5-496a-b44d-d7e196b6b18c): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:56:11.463704 kubelet[2789]: E1216 12:56:11.463578 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f6f6f8cd5-qx8fz" podUID="5a9eaa6d-ffc5-496a-b44d-d7e196b6b18c" Dec 16 12:56:11.465774 containerd[1617]: time="2025-12-16T12:56:11.465688293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:56:11.650820 systemd-networkd[1510]: calibfbf8f674ca: Link UP Dec 16 12:56:11.652265 systemd-networkd[1510]: calibfbf8f674ca: Gained carrier Dec 16 12:56:11.676044 containerd[1617]: 2025-12-16 12:56:11.517 [INFO][4452] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--3--ef2be4b8ba-k8s-csi--node--driver--rmjtf-eth0 csi-node-driver- calico-system 7b89c039-0754-43bd-ad85-5506dee48dad 788 0 2025-12-16 12:55:47 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4515.1.0-3-ef2be4b8ba csi-node-driver-rmjtf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibfbf8f674ca [] [] }} ContainerID="088e87ebaa70d205dec3c46d90b528ce6ab32917d411e5a69bd72b1b180fb283" Namespace="calico-system" Pod="csi-node-driver-rmjtf" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-csi--node--driver--rmjtf-" Dec 16 
12:56:11.676044 containerd[1617]: 2025-12-16 12:56:11.518 [INFO][4452] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="088e87ebaa70d205dec3c46d90b528ce6ab32917d411e5a69bd72b1b180fb283" Namespace="calico-system" Pod="csi-node-driver-rmjtf" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-csi--node--driver--rmjtf-eth0" Dec 16 12:56:11.676044 containerd[1617]: 2025-12-16 12:56:11.575 [INFO][4463] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="088e87ebaa70d205dec3c46d90b528ce6ab32917d411e5a69bd72b1b180fb283" HandleID="k8s-pod-network.088e87ebaa70d205dec3c46d90b528ce6ab32917d411e5a69bd72b1b180fb283" Workload="ci--4515.1.0--3--ef2be4b8ba-k8s-csi--node--driver--rmjtf-eth0" Dec 16 12:56:11.676784 containerd[1617]: 2025-12-16 12:56:11.575 [INFO][4463] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="088e87ebaa70d205dec3c46d90b528ce6ab32917d411e5a69bd72b1b180fb283" HandleID="k8s-pod-network.088e87ebaa70d205dec3c46d90b528ce6ab32917d411e5a69bd72b1b180fb283" Workload="ci--4515.1.0--3--ef2be4b8ba-k8s-csi--node--driver--rmjtf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f8c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-3-ef2be4b8ba", "pod":"csi-node-driver-rmjtf", "timestamp":"2025-12-16 12:56:11.575211266 +0000 UTC"}, Hostname:"ci-4515.1.0-3-ef2be4b8ba", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:56:11.676784 containerd[1617]: 2025-12-16 12:56:11.575 [INFO][4463] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:56:11.676784 containerd[1617]: 2025-12-16 12:56:11.575 [INFO][4463] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:56:11.676784 containerd[1617]: 2025-12-16 12:56:11.575 [INFO][4463] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-3-ef2be4b8ba' Dec 16 12:56:11.676784 containerd[1617]: 2025-12-16 12:56:11.586 [INFO][4463] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.088e87ebaa70d205dec3c46d90b528ce6ab32917d411e5a69bd72b1b180fb283" host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:11.676784 containerd[1617]: 2025-12-16 12:56:11.600 [INFO][4463] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:11.676784 containerd[1617]: 2025-12-16 12:56:11.613 [INFO][4463] ipam/ipam.go 511: Trying affinity for 192.168.79.128/26 host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:11.676784 containerd[1617]: 2025-12-16 12:56:11.616 [INFO][4463] ipam/ipam.go 158: Attempting to load block cidr=192.168.79.128/26 host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:11.676784 containerd[1617]: 2025-12-16 12:56:11.621 [INFO][4463] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.79.128/26 host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:11.677095 containerd[1617]: 2025-12-16 12:56:11.622 [INFO][4463] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.79.128/26 handle="k8s-pod-network.088e87ebaa70d205dec3c46d90b528ce6ab32917d411e5a69bd72b1b180fb283" host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:11.677095 containerd[1617]: 2025-12-16 12:56:11.625 [INFO][4463] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.088e87ebaa70d205dec3c46d90b528ce6ab32917d411e5a69bd72b1b180fb283 Dec 16 12:56:11.677095 containerd[1617]: 2025-12-16 12:56:11.632 [INFO][4463] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.79.128/26 handle="k8s-pod-network.088e87ebaa70d205dec3c46d90b528ce6ab32917d411e5a69bd72b1b180fb283" host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:11.677095 containerd[1617]: 2025-12-16 12:56:11.641 [INFO][4463] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.79.133/26] block=192.168.79.128/26 handle="k8s-pod-network.088e87ebaa70d205dec3c46d90b528ce6ab32917d411e5a69bd72b1b180fb283" host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:11.677095 containerd[1617]: 2025-12-16 12:56:11.641 [INFO][4463] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.79.133/26] handle="k8s-pod-network.088e87ebaa70d205dec3c46d90b528ce6ab32917d411e5a69bd72b1b180fb283" host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:11.677095 containerd[1617]: 2025-12-16 12:56:11.642 [INFO][4463] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:56:11.677095 containerd[1617]: 2025-12-16 12:56:11.642 [INFO][4463] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.79.133/26] IPv6=[] ContainerID="088e87ebaa70d205dec3c46d90b528ce6ab32917d411e5a69bd72b1b180fb283" HandleID="k8s-pod-network.088e87ebaa70d205dec3c46d90b528ce6ab32917d411e5a69bd72b1b180fb283" Workload="ci--4515.1.0--3--ef2be4b8ba-k8s-csi--node--driver--rmjtf-eth0" Dec 16 12:56:11.677258 containerd[1617]: 2025-12-16 12:56:11.647 [INFO][4452] cni-plugin/k8s.go 418: Populated endpoint ContainerID="088e87ebaa70d205dec3c46d90b528ce6ab32917d411e5a69bd72b1b180fb283" Namespace="calico-system" Pod="csi-node-driver-rmjtf" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-csi--node--driver--rmjtf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--3--ef2be4b8ba-k8s-csi--node--driver--rmjtf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7b89c039-0754-43bd-ad85-5506dee48dad", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 55, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-3-ef2be4b8ba", ContainerID:"", Pod:"csi-node-driver-rmjtf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.79.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibfbf8f674ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:56:11.677324 containerd[1617]: 2025-12-16 12:56:11.647 [INFO][4452] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.133/32] ContainerID="088e87ebaa70d205dec3c46d90b528ce6ab32917d411e5a69bd72b1b180fb283" Namespace="calico-system" Pod="csi-node-driver-rmjtf" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-csi--node--driver--rmjtf-eth0" Dec 16 12:56:11.677324 containerd[1617]: 2025-12-16 12:56:11.647 [INFO][4452] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibfbf8f674ca ContainerID="088e87ebaa70d205dec3c46d90b528ce6ab32917d411e5a69bd72b1b180fb283" Namespace="calico-system" Pod="csi-node-driver-rmjtf" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-csi--node--driver--rmjtf-eth0" Dec 16 12:56:11.677324 containerd[1617]: 2025-12-16 12:56:11.652 [INFO][4452] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="088e87ebaa70d205dec3c46d90b528ce6ab32917d411e5a69bd72b1b180fb283" Namespace="calico-system" Pod="csi-node-driver-rmjtf" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-csi--node--driver--rmjtf-eth0" Dec 16 12:56:11.678718 
containerd[1617]: 2025-12-16 12:56:11.652 [INFO][4452] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="088e87ebaa70d205dec3c46d90b528ce6ab32917d411e5a69bd72b1b180fb283" Namespace="calico-system" Pod="csi-node-driver-rmjtf" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-csi--node--driver--rmjtf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--3--ef2be4b8ba-k8s-csi--node--driver--rmjtf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7b89c039-0754-43bd-ad85-5506dee48dad", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 55, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-3-ef2be4b8ba", ContainerID:"088e87ebaa70d205dec3c46d90b528ce6ab32917d411e5a69bd72b1b180fb283", Pod:"csi-node-driver-rmjtf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.79.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibfbf8f674ca", MAC:"e2:35:d0:de:55:0f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:56:11.678829 containerd[1617]: 
2025-12-16 12:56:11.671 [INFO][4452] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="088e87ebaa70d205dec3c46d90b528ce6ab32917d411e5a69bd72b1b180fb283" Namespace="calico-system" Pod="csi-node-driver-rmjtf" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-csi--node--driver--rmjtf-eth0" Dec 16 12:56:11.713277 containerd[1617]: time="2025-12-16T12:56:11.713122720Z" level=info msg="connecting to shim 088e87ebaa70d205dec3c46d90b528ce6ab32917d411e5a69bd72b1b180fb283" address="unix:///run/containerd/s/b282bdbe70e1b8b1adf6616ae3b6522751f8a51bef4fb62406a1159e59466a65" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:56:11.748890 kubelet[2789]: E1216 12:56:11.748568 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f6f6f8cd5-qx8fz" podUID="5a9eaa6d-ffc5-496a-b44d-d7e196b6b18c" Dec 16 12:56:11.755425 kubelet[2789]: E1216 12:56:11.755325 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f6f6f8cd5-nxqwt" podUID="4d5fd089-d56c-460c-b006-cc36a126ec32" Dec 16 12:56:11.755000 audit[4500]: NETFILTER_CFG table=filter:132 family=2 entries=48 op=nft_register_chain pid=4500 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 
12:56:11.755000 audit[4500]: SYSCALL arch=c000003e syscall=46 success=yes exit=23140 a0=3 a1=7ffffec821e0 a2=0 a3=7ffffec821cc items=0 ppid=3943 pid=4500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:11.755000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:56:11.791091 systemd[1]: Started cri-containerd-088e87ebaa70d205dec3c46d90b528ce6ab32917d411e5a69bd72b1b180fb283.scope - libcontainer container 088e87ebaa70d205dec3c46d90b528ce6ab32917d411e5a69bd72b1b180fb283. Dec 16 12:56:11.817714 containerd[1617]: time="2025-12-16T12:56:11.817660093Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:56:11.818676 containerd[1617]: time="2025-12-16T12:56:11.818533590Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:56:11.818676 containerd[1617]: time="2025-12-16T12:56:11.818642910Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:56:11.820000 audit: BPF prog-id=229 op=LOAD Dec 16 12:56:11.822000 audit: BPF prog-id=230 op=LOAD Dec 16 12:56:11.822000 audit[4499]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c238 a2=98 a3=0 items=0 ppid=4486 pid=4499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:11.822000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038386538376562616137306432303564656333633436643930623532 Dec 16 12:56:11.822000 audit: BPF prog-id=230 op=UNLOAD Dec 16 12:56:11.822000 audit[4499]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4486 pid=4499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:11.822000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038386538376562616137306432303564656333633436643930623532 Dec 16 12:56:11.822000 audit: BPF prog-id=231 op=LOAD Dec 16 12:56:11.822000 audit[4499]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c488 a2=98 a3=0 items=0 ppid=4486 pid=4499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:11.822000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038386538376562616137306432303564656333633436643930623532 Dec 16 12:56:11.822000 audit: BPF prog-id=232 op=LOAD Dec 16 12:56:11.822000 audit[4499]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00018c218 a2=98 a3=0 items=0 ppid=4486 pid=4499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 12:56:11.822000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038386538376562616137306432303564656333633436643930623532 Dec 16 12:56:11.823000 audit: BPF prog-id=232 op=UNLOAD Dec 16 12:56:11.823000 audit[4499]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4486 pid=4499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:11.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038386538376562616137306432303564656333633436643930623532 Dec 16 12:56:11.823000 audit: BPF prog-id=231 op=UNLOAD Dec 16 12:56:11.823000 audit[4499]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4486 pid=4499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:11.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038386538376562616137306432303564656333633436643930623532 Dec 16 12:56:11.823000 audit: BPF prog-id=233 op=LOAD Dec 16 12:56:11.823000 audit[4499]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c6e8 a2=98 a3=0 items=0 ppid=4486 pid=4499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:11.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038386538376562616137306432303564656333633436643930623532 Dec 16 12:56:11.833407 kubelet[2789]: E1216 12:56:11.833190 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:56:11.833651 kubelet[2789]: E1216 12:56:11.833559 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:56:11.834327 kubelet[2789]: E1216 12:56:11.834019 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vwjlh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-hvm4l_calico-system(9d74f6fb-d2c4-41ff-9241-88dfaec31538): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:56:11.835978 kubelet[2789]: E1216 12:56:11.835916 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hvm4l" podUID="9d74f6fb-d2c4-41ff-9241-88dfaec31538" Dec 16 12:56:11.849000 audit[4520]: NETFILTER_CFG table=filter:133 family=2 entries=20 op=nft_register_rule pid=4520 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:56:11.849000 audit[4520]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc67b378b0 a2=0 a3=7ffc67b3789c items=0 ppid=2936 pid=4520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:11.849000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:56:11.854000 audit[4520]: NETFILTER_CFG table=nat:134 family=2 entries=14 op=nft_register_rule pid=4520 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:56:11.854000 audit[4520]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc67b378b0 a2=0 a3=0 items=0 ppid=2936 pid=4520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:11.854000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:56:11.876160 containerd[1617]: time="2025-12-16T12:56:11.875849978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rmjtf,Uid:7b89c039-0754-43bd-ad85-5506dee48dad,Namespace:calico-system,Attempt:0,} returns sandbox id \"088e87ebaa70d205dec3c46d90b528ce6ab32917d411e5a69bd72b1b180fb283\"" Dec 16 12:56:11.880874 containerd[1617]: time="2025-12-16T12:56:11.880833498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:56:11.888885 systemd-networkd[1510]: calic6f2cde6ca2: Gained IPv6LL Dec 16 12:56:12.224491 containerd[1617]: time="2025-12-16T12:56:12.224237314Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:56:12.225481 containerd[1617]: time="2025-12-16T12:56:12.225329394Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:56:12.225481 containerd[1617]: time="2025-12-16T12:56:12.225380013Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:56:12.225889 kubelet[2789]: E1216 12:56:12.225790 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:56:12.225978 kubelet[2789]: E1216 12:56:12.225916 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:56:12.226406 kubelet[2789]: E1216 12:56:12.226309 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lwc8p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rmjtf_calico-system(7b89c039-0754-43bd-ad85-5506dee48dad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Dec 16 12:56:12.230250 containerd[1617]: time="2025-12-16T12:56:12.230210177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:56:12.417825 containerd[1617]: time="2025-12-16T12:56:12.417763457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b46df5bf6-8vt25,Uid:eceb5ead-85dc-4ae8-98b5-b55994dab5ce,Namespace:calico-system,Attempt:0,}" Dec 16 12:56:12.465792 systemd-networkd[1510]: cali1ce36baf194: Gained IPv6LL Dec 16 12:56:12.566378 containerd[1617]: time="2025-12-16T12:56:12.566198098Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:56:12.568168 containerd[1617]: time="2025-12-16T12:56:12.568113798Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:56:12.568536 containerd[1617]: time="2025-12-16T12:56:12.568380014Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:56:12.569177 kubelet[2789]: E1216 12:56:12.568913 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:56:12.569279 kubelet[2789]: E1216 12:56:12.569219 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:56:12.569706 kubelet[2789]: E1216 12:56:12.569437 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lwc8p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Conta
inerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rmjtf_calico-system(7b89c039-0754-43bd-ad85-5506dee48dad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:56:12.570786 kubelet[2789]: E1216 12:56:12.570747 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rmjtf" podUID="7b89c039-0754-43bd-ad85-5506dee48dad" Dec 16 12:56:12.616120 systemd-networkd[1510]: califeb7cd0ebdc: Link UP Dec 16 12:56:12.619993 systemd-networkd[1510]: califeb7cd0ebdc: Gained carrier Dec 16 12:56:12.647522 containerd[1617]: 2025-12-16 12:56:12.497 [INFO][4527] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--3--ef2be4b8ba-k8s-calico--kube--controllers--7b46df5bf6--8vt25-eth0 calico-kube-controllers-7b46df5bf6- calico-system eceb5ead-85dc-4ae8-98b5-b55994dab5ce 912 0 2025-12-16 12:55:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7b46df5bf6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4515.1.0-3-ef2be4b8ba 
calico-kube-controllers-7b46df5bf6-8vt25 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califeb7cd0ebdc [] [] }} ContainerID="7a945b93910ed749ed87d79a25a5a572bcba04072542960e9fdc76e980ac09ce" Namespace="calico-system" Pod="calico-kube-controllers-7b46df5bf6-8vt25" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-calico--kube--controllers--7b46df5bf6--8vt25-" Dec 16 12:56:12.647522 containerd[1617]: 2025-12-16 12:56:12.497 [INFO][4527] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7a945b93910ed749ed87d79a25a5a572bcba04072542960e9fdc76e980ac09ce" Namespace="calico-system" Pod="calico-kube-controllers-7b46df5bf6-8vt25" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-calico--kube--controllers--7b46df5bf6--8vt25-eth0" Dec 16 12:56:12.647522 containerd[1617]: 2025-12-16 12:56:12.550 [INFO][4539] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7a945b93910ed749ed87d79a25a5a572bcba04072542960e9fdc76e980ac09ce" HandleID="k8s-pod-network.7a945b93910ed749ed87d79a25a5a572bcba04072542960e9fdc76e980ac09ce" Workload="ci--4515.1.0--3--ef2be4b8ba-k8s-calico--kube--controllers--7b46df5bf6--8vt25-eth0" Dec 16 12:56:12.648341 containerd[1617]: 2025-12-16 12:56:12.550 [INFO][4539] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7a945b93910ed749ed87d79a25a5a572bcba04072542960e9fdc76e980ac09ce" HandleID="k8s-pod-network.7a945b93910ed749ed87d79a25a5a572bcba04072542960e9fdc76e980ac09ce" Workload="ci--4515.1.0--3--ef2be4b8ba-k8s-calico--kube--controllers--7b46df5bf6--8vt25-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f9b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-3-ef2be4b8ba", "pod":"calico-kube-controllers-7b46df5bf6-8vt25", "timestamp":"2025-12-16 12:56:12.550293139 +0000 UTC"}, Hostname:"ci-4515.1.0-3-ef2be4b8ba", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:56:12.648341 containerd[1617]: 2025-12-16 12:56:12.550 [INFO][4539] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:56:12.648341 containerd[1617]: 2025-12-16 12:56:12.550 [INFO][4539] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:56:12.648341 containerd[1617]: 2025-12-16 12:56:12.550 [INFO][4539] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-3-ef2be4b8ba' Dec 16 12:56:12.648341 containerd[1617]: 2025-12-16 12:56:12.560 [INFO][4539] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7a945b93910ed749ed87d79a25a5a572bcba04072542960e9fdc76e980ac09ce" host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:12.648341 containerd[1617]: 2025-12-16 12:56:12.571 [INFO][4539] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:12.648341 containerd[1617]: 2025-12-16 12:56:12.580 [INFO][4539] ipam/ipam.go 511: Trying affinity for 192.168.79.128/26 host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:12.648341 containerd[1617]: 2025-12-16 12:56:12.585 [INFO][4539] ipam/ipam.go 158: Attempting to load block cidr=192.168.79.128/26 host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:12.648341 containerd[1617]: 2025-12-16 12:56:12.589 [INFO][4539] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.79.128/26 host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:12.649410 containerd[1617]: 2025-12-16 12:56:12.589 [INFO][4539] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.79.128/26 handle="k8s-pod-network.7a945b93910ed749ed87d79a25a5a572bcba04072542960e9fdc76e980ac09ce" host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:12.649410 containerd[1617]: 2025-12-16 12:56:12.592 [INFO][4539] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.7a945b93910ed749ed87d79a25a5a572bcba04072542960e9fdc76e980ac09ce Dec 16 12:56:12.649410 containerd[1617]: 2025-12-16 12:56:12.597 [INFO][4539] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.79.128/26 handle="k8s-pod-network.7a945b93910ed749ed87d79a25a5a572bcba04072542960e9fdc76e980ac09ce" host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:12.649410 containerd[1617]: 2025-12-16 12:56:12.607 [INFO][4539] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.79.134/26] block=192.168.79.128/26 handle="k8s-pod-network.7a945b93910ed749ed87d79a25a5a572bcba04072542960e9fdc76e980ac09ce" host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:12.649410 containerd[1617]: 2025-12-16 12:56:12.607 [INFO][4539] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.79.134/26] handle="k8s-pod-network.7a945b93910ed749ed87d79a25a5a572bcba04072542960e9fdc76e980ac09ce" host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:12.649410 containerd[1617]: 2025-12-16 12:56:12.607 [INFO][4539] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:56:12.649410 containerd[1617]: 2025-12-16 12:56:12.607 [INFO][4539] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.79.134/26] IPv6=[] ContainerID="7a945b93910ed749ed87d79a25a5a572bcba04072542960e9fdc76e980ac09ce" HandleID="k8s-pod-network.7a945b93910ed749ed87d79a25a5a572bcba04072542960e9fdc76e980ac09ce" Workload="ci--4515.1.0--3--ef2be4b8ba-k8s-calico--kube--controllers--7b46df5bf6--8vt25-eth0" Dec 16 12:56:12.649610 containerd[1617]: 2025-12-16 12:56:12.612 [INFO][4527] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7a945b93910ed749ed87d79a25a5a572bcba04072542960e9fdc76e980ac09ce" Namespace="calico-system" Pod="calico-kube-controllers-7b46df5bf6-8vt25" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-calico--kube--controllers--7b46df5bf6--8vt25-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--3--ef2be4b8ba-k8s-calico--kube--controllers--7b46df5bf6--8vt25-eth0", GenerateName:"calico-kube-controllers-7b46df5bf6-", Namespace:"calico-system", SelfLink:"", UID:"eceb5ead-85dc-4ae8-98b5-b55994dab5ce", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 55, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b46df5bf6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-3-ef2be4b8ba", ContainerID:"", Pod:"calico-kube-controllers-7b46df5bf6-8vt25", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.79.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califeb7cd0ebdc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:56:12.650953 containerd[1617]: 2025-12-16 12:56:12.612 [INFO][4527] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.134/32] ContainerID="7a945b93910ed749ed87d79a25a5a572bcba04072542960e9fdc76e980ac09ce" Namespace="calico-system" Pod="calico-kube-controllers-7b46df5bf6-8vt25" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-calico--kube--controllers--7b46df5bf6--8vt25-eth0" Dec 16 12:56:12.650953 containerd[1617]: 2025-12-16 12:56:12.612 [INFO][4527] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califeb7cd0ebdc ContainerID="7a945b93910ed749ed87d79a25a5a572bcba04072542960e9fdc76e980ac09ce" Namespace="calico-system" Pod="calico-kube-controllers-7b46df5bf6-8vt25" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-calico--kube--controllers--7b46df5bf6--8vt25-eth0" Dec 16 12:56:12.650953 containerd[1617]: 2025-12-16 12:56:12.617 [INFO][4527] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7a945b93910ed749ed87d79a25a5a572bcba04072542960e9fdc76e980ac09ce" Namespace="calico-system" Pod="calico-kube-controllers-7b46df5bf6-8vt25" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-calico--kube--controllers--7b46df5bf6--8vt25-eth0" Dec 16 12:56:12.651064 containerd[1617]: 2025-12-16 12:56:12.621 [INFO][4527] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7a945b93910ed749ed87d79a25a5a572bcba04072542960e9fdc76e980ac09ce" Namespace="calico-system" Pod="calico-kube-controllers-7b46df5bf6-8vt25" 
WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-calico--kube--controllers--7b46df5bf6--8vt25-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--3--ef2be4b8ba-k8s-calico--kube--controllers--7b46df5bf6--8vt25-eth0", GenerateName:"calico-kube-controllers-7b46df5bf6-", Namespace:"calico-system", SelfLink:"", UID:"eceb5ead-85dc-4ae8-98b5-b55994dab5ce", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 55, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b46df5bf6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-3-ef2be4b8ba", ContainerID:"7a945b93910ed749ed87d79a25a5a572bcba04072542960e9fdc76e980ac09ce", Pod:"calico-kube-controllers-7b46df5bf6-8vt25", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.79.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califeb7cd0ebdc", MAC:"fe:ff:96:16:63:99", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:56:12.651130 containerd[1617]: 2025-12-16 12:56:12.639 [INFO][4527] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7a945b93910ed749ed87d79a25a5a572bcba04072542960e9fdc76e980ac09ce" Namespace="calico-system" 
Pod="calico-kube-controllers-7b46df5bf6-8vt25" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-calico--kube--controllers--7b46df5bf6--8vt25-eth0" Dec 16 12:56:12.677000 audit[4559]: NETFILTER_CFG table=filter:135 family=2 entries=52 op=nft_register_chain pid=4559 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:56:12.677000 audit[4559]: SYSCALL arch=c000003e syscall=46 success=yes exit=24328 a0=3 a1=7ffe8594c7f0 a2=0 a3=7ffe8594c7dc items=0 ppid=3943 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:12.677000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:56:12.679869 containerd[1617]: time="2025-12-16T12:56:12.679816793Z" level=info msg="connecting to shim 7a945b93910ed749ed87d79a25a5a572bcba04072542960e9fdc76e980ac09ce" address="unix:///run/containerd/s/4300622bec6e13f41646f32de3db551de5449740fe59d9ba20b4a5471c461a34" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:56:12.729973 systemd[1]: Started cri-containerd-7a945b93910ed749ed87d79a25a5a572bcba04072542960e9fdc76e980ac09ce.scope - libcontainer container 7a945b93910ed749ed87d79a25a5a572bcba04072542960e9fdc76e980ac09ce. 
Dec 16 12:56:12.745000 audit: BPF prog-id=234 op=LOAD Dec 16 12:56:12.747000 audit: BPF prog-id=235 op=LOAD Dec 16 12:56:12.747000 audit[4576]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4564 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:12.747000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761393435623933393130656437343965643837643739613235613561 Dec 16 12:56:12.747000 audit: BPF prog-id=235 op=UNLOAD Dec 16 12:56:12.747000 audit[4576]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4564 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:12.747000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761393435623933393130656437343965643837643739613235613561 Dec 16 12:56:12.747000 audit: BPF prog-id=236 op=LOAD Dec 16 12:56:12.747000 audit[4576]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4564 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:12.747000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761393435623933393130656437343965643837643739613235613561 Dec 16 12:56:12.747000 audit: BPF prog-id=237 op=LOAD Dec 16 12:56:12.747000 audit[4576]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4564 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:12.747000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761393435623933393130656437343965643837643739613235613561 Dec 16 12:56:12.747000 audit: BPF prog-id=237 op=UNLOAD Dec 16 12:56:12.747000 audit[4576]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4564 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:12.747000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761393435623933393130656437343965643837643739613235613561 Dec 16 12:56:12.747000 audit: BPF prog-id=236 op=UNLOAD Dec 16 12:56:12.747000 audit[4576]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4564 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:56:12.747000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761393435623933393130656437343965643837643739613235613561 Dec 16 12:56:12.747000 audit: BPF prog-id=238 op=LOAD Dec 16 12:56:12.747000 audit[4576]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4564 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:12.747000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761393435623933393130656437343965643837643739613235613561 Dec 16 12:56:12.758834 kubelet[2789]: E1216 12:56:12.758518 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hvm4l" podUID="9d74f6fb-d2c4-41ff-9241-88dfaec31538" Dec 16 12:56:12.762100 kubelet[2789]: E1216 12:56:12.762020 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not 
found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rmjtf" podUID="7b89c039-0754-43bd-ad85-5506dee48dad" Dec 16 12:56:12.763038 kubelet[2789]: E1216 12:56:12.762364 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f6f6f8cd5-qx8fz" podUID="5a9eaa6d-ffc5-496a-b44d-d7e196b6b18c" Dec 16 12:56:12.843740 containerd[1617]: time="2025-12-16T12:56:12.843504583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b46df5bf6-8vt25,Uid:eceb5ead-85dc-4ae8-98b5-b55994dab5ce,Namespace:calico-system,Attempt:0,} returns sandbox id \"7a945b93910ed749ed87d79a25a5a572bcba04072542960e9fdc76e980ac09ce\"" Dec 16 12:56:12.848774 containerd[1617]: time="2025-12-16T12:56:12.848452811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:56:12.862000 audit[4603]: NETFILTER_CFG table=filter:136 family=2 entries=20 op=nft_register_rule pid=4603 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:56:12.862000 audit[4603]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdf8104d60 a2=0 a3=7ffdf8104d4c items=0 ppid=2936 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:12.862000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:56:12.868000 audit[4603]: NETFILTER_CFG table=nat:137 family=2 entries=14 op=nft_register_rule pid=4603 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:56:12.868000 audit[4603]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdf8104d60 a2=0 a3=0 items=0 ppid=2936 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:12.868000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:56:13.105183 systemd-networkd[1510]: calibfbf8f674ca: Gained IPv6LL Dec 16 12:56:13.173120 containerd[1617]: time="2025-12-16T12:56:13.172864173Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:56:13.174986 containerd[1617]: time="2025-12-16T12:56:13.174841905Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:56:13.175448 containerd[1617]: time="2025-12-16T12:56:13.174893970Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:56:13.176373 kubelet[2789]: E1216 12:56:13.176181 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:56:13.176805 kubelet[2789]: E1216 12:56:13.176277 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:56:13.177809 kubelet[2789]: E1216 12:56:13.177710 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qrzn7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHan
dler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7b46df5bf6-8vt25_calico-system(eceb5ead-85dc-4ae8-98b5-b55994dab5ce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:56:13.179622 kubelet[2789]: E1216 12:56:13.179552 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b46df5bf6-8vt25" 
podUID="eceb5ead-85dc-4ae8-98b5-b55994dab5ce" Dec 16 12:56:13.417971 kubelet[2789]: E1216 12:56:13.417535 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:56:13.420829 containerd[1617]: time="2025-12-16T12:56:13.420360805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m8248,Uid:17667d96-fec2-4c58-952d-8aee4c298c11,Namespace:kube-system,Attempt:0,}" Dec 16 12:56:13.423732 kubelet[2789]: E1216 12:56:13.423616 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:56:13.428987 containerd[1617]: time="2025-12-16T12:56:13.428900731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jvzjz,Uid:564acc2b-9d61-41a8-ac66-0231a2d37863,Namespace:kube-system,Attempt:0,}" Dec 16 12:56:13.690085 systemd-networkd[1510]: calibbb3810ecf2: Link UP Dec 16 12:56:13.698362 systemd-networkd[1510]: calibbb3810ecf2: Gained carrier Dec 16 12:56:13.732933 containerd[1617]: 2025-12-16 12:56:13.541 [INFO][4615] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--3--ef2be4b8ba-k8s-coredns--674b8bbfcf--jvzjz-eth0 coredns-674b8bbfcf- kube-system 564acc2b-9d61-41a8-ac66-0231a2d37863 904 0 2025-12-16 12:55:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515.1.0-3-ef2be4b8ba coredns-674b8bbfcf-jvzjz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibbb3810ecf2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fa186255bebed32408e1e798e9d564c19159242447c071de5cacbd46ac0728a5" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-jvzjz" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-coredns--674b8bbfcf--jvzjz-" Dec 16 12:56:13.732933 containerd[1617]: 2025-12-16 12:56:13.542 [INFO][4615] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fa186255bebed32408e1e798e9d564c19159242447c071de5cacbd46ac0728a5" Namespace="kube-system" Pod="coredns-674b8bbfcf-jvzjz" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-coredns--674b8bbfcf--jvzjz-eth0" Dec 16 12:56:13.732933 containerd[1617]: 2025-12-16 12:56:13.617 [INFO][4630] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fa186255bebed32408e1e798e9d564c19159242447c071de5cacbd46ac0728a5" HandleID="k8s-pod-network.fa186255bebed32408e1e798e9d564c19159242447c071de5cacbd46ac0728a5" Workload="ci--4515.1.0--3--ef2be4b8ba-k8s-coredns--674b8bbfcf--jvzjz-eth0" Dec 16 12:56:13.733452 containerd[1617]: 2025-12-16 12:56:13.618 [INFO][4630] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fa186255bebed32408e1e798e9d564c19159242447c071de5cacbd46ac0728a5" HandleID="k8s-pod-network.fa186255bebed32408e1e798e9d564c19159242447c071de5cacbd46ac0728a5" Workload="ci--4515.1.0--3--ef2be4b8ba-k8s-coredns--674b8bbfcf--jvzjz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034a730), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515.1.0-3-ef2be4b8ba", "pod":"coredns-674b8bbfcf-jvzjz", "timestamp":"2025-12-16 12:56:13.617673687 +0000 UTC"}, Hostname:"ci-4515.1.0-3-ef2be4b8ba", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:56:13.733452 containerd[1617]: 2025-12-16 12:56:13.618 [INFO][4630] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Dec 16 12:56:13.733452 containerd[1617]: 2025-12-16 12:56:13.618 [INFO][4630] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:56:13.733452 containerd[1617]: 2025-12-16 12:56:13.618 [INFO][4630] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-3-ef2be4b8ba' Dec 16 12:56:13.733452 containerd[1617]: 2025-12-16 12:56:13.639 [INFO][4630] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fa186255bebed32408e1e798e9d564c19159242447c071de5cacbd46ac0728a5" host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:13.733452 containerd[1617]: 2025-12-16 12:56:13.647 [INFO][4630] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:13.733452 containerd[1617]: 2025-12-16 12:56:13.657 [INFO][4630] ipam/ipam.go 511: Trying affinity for 192.168.79.128/26 host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:13.733452 containerd[1617]: 2025-12-16 12:56:13.660 [INFO][4630] ipam/ipam.go 158: Attempting to load block cidr=192.168.79.128/26 host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:13.733452 containerd[1617]: 2025-12-16 12:56:13.663 [INFO][4630] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.79.128/26 host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:13.734565 containerd[1617]: 2025-12-16 12:56:13.663 [INFO][4630] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.79.128/26 handle="k8s-pod-network.fa186255bebed32408e1e798e9d564c19159242447c071de5cacbd46ac0728a5" host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:13.734565 containerd[1617]: 2025-12-16 12:56:13.665 [INFO][4630] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fa186255bebed32408e1e798e9d564c19159242447c071de5cacbd46ac0728a5 Dec 16 12:56:13.734565 containerd[1617]: 2025-12-16 12:56:13.671 [INFO][4630] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.79.128/26 handle="k8s-pod-network.fa186255bebed32408e1e798e9d564c19159242447c071de5cacbd46ac0728a5" 
host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:13.734565 containerd[1617]: 2025-12-16 12:56:13.679 [INFO][4630] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.79.135/26] block=192.168.79.128/26 handle="k8s-pod-network.fa186255bebed32408e1e798e9d564c19159242447c071de5cacbd46ac0728a5" host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:13.734565 containerd[1617]: 2025-12-16 12:56:13.679 [INFO][4630] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.79.135/26] handle="k8s-pod-network.fa186255bebed32408e1e798e9d564c19159242447c071de5cacbd46ac0728a5" host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:13.734565 containerd[1617]: 2025-12-16 12:56:13.679 [INFO][4630] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:56:13.734565 containerd[1617]: 2025-12-16 12:56:13.680 [INFO][4630] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.79.135/26] IPv6=[] ContainerID="fa186255bebed32408e1e798e9d564c19159242447c071de5cacbd46ac0728a5" HandleID="k8s-pod-network.fa186255bebed32408e1e798e9d564c19159242447c071de5cacbd46ac0728a5" Workload="ci--4515.1.0--3--ef2be4b8ba-k8s-coredns--674b8bbfcf--jvzjz-eth0" Dec 16 12:56:13.735571 containerd[1617]: 2025-12-16 12:56:13.683 [INFO][4615] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fa186255bebed32408e1e798e9d564c19159242447c071de5cacbd46ac0728a5" Namespace="kube-system" Pod="coredns-674b8bbfcf-jvzjz" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-coredns--674b8bbfcf--jvzjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--3--ef2be4b8ba-k8s-coredns--674b8bbfcf--jvzjz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"564acc2b-9d61-41a8-ac66-0231a2d37863", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 55, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-3-ef2be4b8ba", ContainerID:"", Pod:"coredns-674b8bbfcf-jvzjz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibbb3810ecf2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:56:13.735571 containerd[1617]: 2025-12-16 12:56:13.684 [INFO][4615] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.135/32] ContainerID="fa186255bebed32408e1e798e9d564c19159242447c071de5cacbd46ac0728a5" Namespace="kube-system" Pod="coredns-674b8bbfcf-jvzjz" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-coredns--674b8bbfcf--jvzjz-eth0" Dec 16 12:56:13.735571 containerd[1617]: 2025-12-16 12:56:13.684 [INFO][4615] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibbb3810ecf2 ContainerID="fa186255bebed32408e1e798e9d564c19159242447c071de5cacbd46ac0728a5" Namespace="kube-system" Pod="coredns-674b8bbfcf-jvzjz" 
WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-coredns--674b8bbfcf--jvzjz-eth0" Dec 16 12:56:13.735571 containerd[1617]: 2025-12-16 12:56:13.701 [INFO][4615] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fa186255bebed32408e1e798e9d564c19159242447c071de5cacbd46ac0728a5" Namespace="kube-system" Pod="coredns-674b8bbfcf-jvzjz" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-coredns--674b8bbfcf--jvzjz-eth0" Dec 16 12:56:13.735571 containerd[1617]: 2025-12-16 12:56:13.703 [INFO][4615] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fa186255bebed32408e1e798e9d564c19159242447c071de5cacbd46ac0728a5" Namespace="kube-system" Pod="coredns-674b8bbfcf-jvzjz" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-coredns--674b8bbfcf--jvzjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--3--ef2be4b8ba-k8s-coredns--674b8bbfcf--jvzjz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"564acc2b-9d61-41a8-ac66-0231a2d37863", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 55, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-3-ef2be4b8ba", ContainerID:"fa186255bebed32408e1e798e9d564c19159242447c071de5cacbd46ac0728a5", Pod:"coredns-674b8bbfcf-jvzjz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.135/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibbb3810ecf2", MAC:"7e:3d:64:ba:31:73", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:56:13.735571 containerd[1617]: 2025-12-16 12:56:13.730 [INFO][4615] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fa186255bebed32408e1e798e9d564c19159242447c071de5cacbd46ac0728a5" Namespace="kube-system" Pod="coredns-674b8bbfcf-jvzjz" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-coredns--674b8bbfcf--jvzjz-eth0" Dec 16 12:56:13.772212 containerd[1617]: time="2025-12-16T12:56:13.772054534Z" level=info msg="connecting to shim fa186255bebed32408e1e798e9d564c19159242447c071de5cacbd46ac0728a5" address="unix:///run/containerd/s/6793414186b3e7f2958f4d87bea9d4f4b0859f4f112882dced881f1e68e79768" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:56:13.779949 kubelet[2789]: E1216 12:56:13.779899 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b46df5bf6-8vt25" 
podUID="eceb5ead-85dc-4ae8-98b5-b55994dab5ce" Dec 16 12:56:13.781673 kubelet[2789]: E1216 12:56:13.781057 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rmjtf" podUID="7b89c039-0754-43bd-ad85-5506dee48dad" Dec 16 12:56:13.809912 systemd-networkd[1510]: califeb7cd0ebdc: Gained IPv6LL Dec 16 12:56:13.843000 audit[4684]: NETFILTER_CFG table=filter:138 family=2 entries=68 op=nft_register_chain pid=4684 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:56:13.843000 audit[4684]: SYSCALL arch=c000003e syscall=46 success=yes exit=31344 a0=3 a1=7fff094e6700 a2=0 a3=7fff094e66ec items=0 ppid=3943 pid=4684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:13.843000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:56:13.849893 systemd[1]: Started cri-containerd-fa186255bebed32408e1e798e9d564c19159242447c071de5cacbd46ac0728a5.scope - libcontainer container 
fa186255bebed32408e1e798e9d564c19159242447c071de5cacbd46ac0728a5. Dec 16 12:56:13.874000 audit: BPF prog-id=239 op=LOAD Dec 16 12:56:13.875000 audit: BPF prog-id=240 op=LOAD Dec 16 12:56:13.875000 audit[4672]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4658 pid=4672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:13.875000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661313836323535626562656433323430386531653739386539643536 Dec 16 12:56:13.875000 audit: BPF prog-id=240 op=UNLOAD Dec 16 12:56:13.875000 audit[4672]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4658 pid=4672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:13.875000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661313836323535626562656433323430386531653739386539643536 Dec 16 12:56:13.875000 audit: BPF prog-id=241 op=LOAD Dec 16 12:56:13.875000 audit[4672]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4658 pid=4672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:13.875000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661313836323535626562656433323430386531653739386539643536 Dec 16 12:56:13.875000 audit: BPF prog-id=242 op=LOAD Dec 16 12:56:13.875000 audit[4672]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4658 pid=4672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:13.875000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661313836323535626562656433323430386531653739386539643536 Dec 16 12:56:13.875000 audit: BPF prog-id=242 op=UNLOAD Dec 16 12:56:13.875000 audit[4672]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4658 pid=4672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:13.875000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661313836323535626562656433323430386531653739386539643536 Dec 16 12:56:13.875000 audit: BPF prog-id=241 op=UNLOAD Dec 16 12:56:13.875000 audit[4672]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4658 pid=4672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:56:13.875000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661313836323535626562656433323430386531653739386539643536 Dec 16 12:56:13.875000 audit: BPF prog-id=243 op=LOAD Dec 16 12:56:13.875000 audit[4672]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4658 pid=4672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:13.875000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661313836323535626562656433323430386531653739386539643536 Dec 16 12:56:13.904278 systemd-networkd[1510]: cali1aac80a85bf: Link UP Dec 16 12:56:13.907059 systemd-networkd[1510]: cali1aac80a85bf: Gained carrier Dec 16 12:56:13.932527 containerd[1617]: 2025-12-16 12:56:13.541 [INFO][4604] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--3--ef2be4b8ba-k8s-coredns--674b8bbfcf--m8248-eth0 coredns-674b8bbfcf- kube-system 17667d96-fec2-4c58-952d-8aee4c298c11 913 0 2025-12-16 12:55:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515.1.0-3-ef2be4b8ba coredns-674b8bbfcf-m8248 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1aac80a85bf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a08dbfb690c890be0feed07bd85273c146592644c6fda3252e7f9e37fc7a827d" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-m8248" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-coredns--674b8bbfcf--m8248-" Dec 16 12:56:13.932527 containerd[1617]: 2025-12-16 12:56:13.542 [INFO][4604] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a08dbfb690c890be0feed07bd85273c146592644c6fda3252e7f9e37fc7a827d" Namespace="kube-system" Pod="coredns-674b8bbfcf-m8248" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-coredns--674b8bbfcf--m8248-eth0" Dec 16 12:56:13.932527 containerd[1617]: 2025-12-16 12:56:13.633 [INFO][4632] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a08dbfb690c890be0feed07bd85273c146592644c6fda3252e7f9e37fc7a827d" HandleID="k8s-pod-network.a08dbfb690c890be0feed07bd85273c146592644c6fda3252e7f9e37fc7a827d" Workload="ci--4515.1.0--3--ef2be4b8ba-k8s-coredns--674b8bbfcf--m8248-eth0" Dec 16 12:56:13.932527 containerd[1617]: 2025-12-16 12:56:13.633 [INFO][4632] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a08dbfb690c890be0feed07bd85273c146592644c6fda3252e7f9e37fc7a827d" HandleID="k8s-pod-network.a08dbfb690c890be0feed07bd85273c146592644c6fda3252e7f9e37fc7a827d" Workload="ci--4515.1.0--3--ef2be4b8ba-k8s-coredns--674b8bbfcf--m8248-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001004f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515.1.0-3-ef2be4b8ba", "pod":"coredns-674b8bbfcf-m8248", "timestamp":"2025-12-16 12:56:13.633106562 +0000 UTC"}, Hostname:"ci-4515.1.0-3-ef2be4b8ba", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:56:13.932527 containerd[1617]: 2025-12-16 12:56:13.633 [INFO][4632] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:56:13.932527 containerd[1617]: 2025-12-16 12:56:13.679 [INFO][4632] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:56:13.932527 containerd[1617]: 2025-12-16 12:56:13.680 [INFO][4632] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-3-ef2be4b8ba' Dec 16 12:56:13.932527 containerd[1617]: 2025-12-16 12:56:13.741 [INFO][4632] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a08dbfb690c890be0feed07bd85273c146592644c6fda3252e7f9e37fc7a827d" host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:13.932527 containerd[1617]: 2025-12-16 12:56:13.751 [INFO][4632] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:13.932527 containerd[1617]: 2025-12-16 12:56:13.778 [INFO][4632] ipam/ipam.go 511: Trying affinity for 192.168.79.128/26 host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:13.932527 containerd[1617]: 2025-12-16 12:56:13.805 [INFO][4632] ipam/ipam.go 158: Attempting to load block cidr=192.168.79.128/26 host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:13.932527 containerd[1617]: 2025-12-16 12:56:13.815 [INFO][4632] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.79.128/26 host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:13.932527 containerd[1617]: 2025-12-16 12:56:13.816 [INFO][4632] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.79.128/26 handle="k8s-pod-network.a08dbfb690c890be0feed07bd85273c146592644c6fda3252e7f9e37fc7a827d" host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:13.932527 containerd[1617]: 2025-12-16 12:56:13.820 [INFO][4632] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a08dbfb690c890be0feed07bd85273c146592644c6fda3252e7f9e37fc7a827d Dec 16 12:56:13.932527 containerd[1617]: 2025-12-16 12:56:13.855 [INFO][4632] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.79.128/26 handle="k8s-pod-network.a08dbfb690c890be0feed07bd85273c146592644c6fda3252e7f9e37fc7a827d" host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:13.932527 containerd[1617]: 2025-12-16 12:56:13.886 [INFO][4632] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.79.136/26] block=192.168.79.128/26 handle="k8s-pod-network.a08dbfb690c890be0feed07bd85273c146592644c6fda3252e7f9e37fc7a827d" host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:13.932527 containerd[1617]: 2025-12-16 12:56:13.886 [INFO][4632] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.79.136/26] handle="k8s-pod-network.a08dbfb690c890be0feed07bd85273c146592644c6fda3252e7f9e37fc7a827d" host="ci-4515.1.0-3-ef2be4b8ba" Dec 16 12:56:13.932527 containerd[1617]: 2025-12-16 12:56:13.887 [INFO][4632] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:56:13.932527 containerd[1617]: 2025-12-16 12:56:13.887 [INFO][4632] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.79.136/26] IPv6=[] ContainerID="a08dbfb690c890be0feed07bd85273c146592644c6fda3252e7f9e37fc7a827d" HandleID="k8s-pod-network.a08dbfb690c890be0feed07bd85273c146592644c6fda3252e7f9e37fc7a827d" Workload="ci--4515.1.0--3--ef2be4b8ba-k8s-coredns--674b8bbfcf--m8248-eth0" Dec 16 12:56:13.933336 containerd[1617]: 2025-12-16 12:56:13.893 [INFO][4604] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a08dbfb690c890be0feed07bd85273c146592644c6fda3252e7f9e37fc7a827d" Namespace="kube-system" Pod="coredns-674b8bbfcf-m8248" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-coredns--674b8bbfcf--m8248-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--3--ef2be4b8ba-k8s-coredns--674b8bbfcf--m8248-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"17667d96-fec2-4c58-952d-8aee4c298c11", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 55, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-3-ef2be4b8ba", ContainerID:"", Pod:"coredns-674b8bbfcf-m8248", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1aac80a85bf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:56:13.933336 containerd[1617]: 2025-12-16 12:56:13.893 [INFO][4604] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.136/32] ContainerID="a08dbfb690c890be0feed07bd85273c146592644c6fda3252e7f9e37fc7a827d" Namespace="kube-system" Pod="coredns-674b8bbfcf-m8248" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-coredns--674b8bbfcf--m8248-eth0" Dec 16 12:56:13.933336 containerd[1617]: 2025-12-16 12:56:13.893 [INFO][4604] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1aac80a85bf ContainerID="a08dbfb690c890be0feed07bd85273c146592644c6fda3252e7f9e37fc7a827d" Namespace="kube-system" Pod="coredns-674b8bbfcf-m8248" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-coredns--674b8bbfcf--m8248-eth0" Dec 16 12:56:13.933336 containerd[1617]: 2025-12-16 12:56:13.915 [INFO][4604] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a08dbfb690c890be0feed07bd85273c146592644c6fda3252e7f9e37fc7a827d" Namespace="kube-system" Pod="coredns-674b8bbfcf-m8248" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-coredns--674b8bbfcf--m8248-eth0" Dec 16 12:56:13.933336 containerd[1617]: 2025-12-16 12:56:13.915 [INFO][4604] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a08dbfb690c890be0feed07bd85273c146592644c6fda3252e7f9e37fc7a827d" Namespace="kube-system" Pod="coredns-674b8bbfcf-m8248" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-coredns--674b8bbfcf--m8248-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--3--ef2be4b8ba-k8s-coredns--674b8bbfcf--m8248-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"17667d96-fec2-4c58-952d-8aee4c298c11", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 55, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-3-ef2be4b8ba", ContainerID:"a08dbfb690c890be0feed07bd85273c146592644c6fda3252e7f9e37fc7a827d", Pod:"coredns-674b8bbfcf-m8248", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1aac80a85bf", 
MAC:"32:24:8f:cd:6b:22", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:56:13.933336 containerd[1617]: 2025-12-16 12:56:13.928 [INFO][4604] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a08dbfb690c890be0feed07bd85273c146592644c6fda3252e7f9e37fc7a827d" Namespace="kube-system" Pod="coredns-674b8bbfcf-m8248" WorkloadEndpoint="ci--4515.1.0--3--ef2be4b8ba-k8s-coredns--674b8bbfcf--m8248-eth0" Dec 16 12:56:14.002682 containerd[1617]: time="2025-12-16T12:56:14.002002605Z" level=info msg="connecting to shim a08dbfb690c890be0feed07bd85273c146592644c6fda3252e7f9e37fc7a827d" address="unix:///run/containerd/s/22b420623781fbb8d05c0539a667759860a10d6631dbdf561e782d8927bf13ac" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:56:14.027665 containerd[1617]: time="2025-12-16T12:56:14.027542556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jvzjz,Uid:564acc2b-9d61-41a8-ac66-0231a2d37863,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa186255bebed32408e1e798e9d564c19159242447c071de5cacbd46ac0728a5\"" Dec 16 12:56:14.030286 kubelet[2789]: E1216 12:56:14.030241 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:56:14.055000 audit[4735]: NETFILTER_CFG table=filter:139 family=2 entries=58 op=nft_register_chain pid=4735 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 
12:56:14.055000 audit[4735]: SYSCALL arch=c000003e syscall=46 success=yes exit=26744 a0=3 a1=7fff987696d0 a2=0 a3=7fff987696bc items=0 ppid=3943 pid=4735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:14.055000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:56:14.081999 systemd[1]: Started cri-containerd-a08dbfb690c890be0feed07bd85273c146592644c6fda3252e7f9e37fc7a827d.scope - libcontainer container a08dbfb690c890be0feed07bd85273c146592644c6fda3252e7f9e37fc7a827d. Dec 16 12:56:14.103666 containerd[1617]: time="2025-12-16T12:56:14.103038854Z" level=info msg="CreateContainer within sandbox \"fa186255bebed32408e1e798e9d564c19159242447c071de5cacbd46ac0728a5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:56:14.121314 containerd[1617]: time="2025-12-16T12:56:14.121275531Z" level=info msg="Container d820dda8e22ab279d5ac01dee2e45359f1895904b826139e4bceee72cbe507d0: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:56:14.137503 containerd[1617]: time="2025-12-16T12:56:14.136923466Z" level=info msg="CreateContainer within sandbox \"fa186255bebed32408e1e798e9d564c19159242447c071de5cacbd46ac0728a5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d820dda8e22ab279d5ac01dee2e45359f1895904b826139e4bceee72cbe507d0\"" Dec 16 12:56:14.135000 audit: BPF prog-id=244 op=LOAD Dec 16 12:56:14.139657 containerd[1617]: time="2025-12-16T12:56:14.139571273Z" level=info msg="StartContainer for \"d820dda8e22ab279d5ac01dee2e45359f1895904b826139e4bceee72cbe507d0\"" Dec 16 12:56:14.139000 audit: BPF prog-id=245 op=LOAD Dec 16 12:56:14.139000 audit[4734]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 
ppid=4721 pid=4734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:14.139000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130386462666236393063383930626530666565643037626438353237 Dec 16 12:56:14.140000 audit: BPF prog-id=245 op=UNLOAD Dec 16 12:56:14.140000 audit[4734]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4721 pid=4734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:14.140000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130386462666236393063383930626530666565643037626438353237 Dec 16 12:56:14.143000 audit: BPF prog-id=246 op=LOAD Dec 16 12:56:14.143000 audit[4734]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4721 pid=4734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:14.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130386462666236393063383930626530666565643037626438353237 Dec 16 12:56:14.143000 audit: BPF prog-id=247 op=LOAD Dec 16 12:56:14.143000 audit[4734]: SYSCALL arch=c000003e syscall=321 success=yes 
exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4721 pid=4734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:14.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130386462666236393063383930626530666565643037626438353237 Dec 16 12:56:14.143000 audit: BPF prog-id=247 op=UNLOAD Dec 16 12:56:14.143000 audit[4734]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4721 pid=4734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:14.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130386462666236393063383930626530666565643037626438353237 Dec 16 12:56:14.143000 audit: BPF prog-id=246 op=UNLOAD Dec 16 12:56:14.143000 audit[4734]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4721 pid=4734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:14.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130386462666236393063383930626530666565643037626438353237 Dec 16 12:56:14.143000 audit: BPF prog-id=248 op=LOAD Dec 16 12:56:14.143000 audit[4734]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=4721 pid=4734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:14.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130386462666236393063383930626530666565643037626438353237 Dec 16 12:56:14.148973 containerd[1617]: time="2025-12-16T12:56:14.147622523Z" level=info msg="connecting to shim d820dda8e22ab279d5ac01dee2e45359f1895904b826139e4bceee72cbe507d0" address="unix:///run/containerd/s/6793414186b3e7f2958f4d87bea9d4f4b0859f4f112882dced881f1e68e79768" protocol=ttrpc version=3 Dec 16 12:56:14.184890 systemd[1]: Started cri-containerd-d820dda8e22ab279d5ac01dee2e45359f1895904b826139e4bceee72cbe507d0.scope - libcontainer container d820dda8e22ab279d5ac01dee2e45359f1895904b826139e4bceee72cbe507d0. 
Dec 16 12:56:14.229000 audit: BPF prog-id=249 op=LOAD Dec 16 12:56:14.231000 audit: BPF prog-id=250 op=LOAD Dec 16 12:56:14.231000 audit[4755]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4658 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:14.231000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438323064646138653232616232373964356163303164656532653435 Dec 16 12:56:14.231000 audit: BPF prog-id=250 op=UNLOAD Dec 16 12:56:14.231000 audit[4755]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4658 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:14.231000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438323064646138653232616232373964356163303164656532653435 Dec 16 12:56:14.232000 audit: BPF prog-id=251 op=LOAD Dec 16 12:56:14.232000 audit[4755]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4658 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:14.232000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438323064646138653232616232373964356163303164656532653435 Dec 16 12:56:14.232000 audit: BPF prog-id=252 op=LOAD Dec 16 12:56:14.232000 audit[4755]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4658 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:14.232000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438323064646138653232616232373964356163303164656532653435 Dec 16 12:56:14.233000 audit: BPF prog-id=252 op=UNLOAD Dec 16 12:56:14.233000 audit[4755]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4658 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:14.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438323064646138653232616232373964356163303164656532653435 Dec 16 12:56:14.233000 audit: BPF prog-id=251 op=UNLOAD Dec 16 12:56:14.233000 audit[4755]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4658 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:56:14.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438323064646138653232616232373964356163303164656532653435 Dec 16 12:56:14.233000 audit: BPF prog-id=253 op=LOAD Dec 16 12:56:14.233000 audit[4755]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4658 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:14.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438323064646138653232616232373964356163303164656532653435 Dec 16 12:56:14.246046 containerd[1617]: time="2025-12-16T12:56:14.245434598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m8248,Uid:17667d96-fec2-4c58-952d-8aee4c298c11,Namespace:kube-system,Attempt:0,} returns sandbox id \"a08dbfb690c890be0feed07bd85273c146592644c6fda3252e7f9e37fc7a827d\"" Dec 16 12:56:14.248493 kubelet[2789]: E1216 12:56:14.248164 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:56:14.255071 containerd[1617]: time="2025-12-16T12:56:14.254948337Z" level=info msg="CreateContainer within sandbox \"a08dbfb690c890be0feed07bd85273c146592644c6fda3252e7f9e37fc7a827d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:56:14.272693 containerd[1617]: time="2025-12-16T12:56:14.272032576Z" level=info msg="Container 02e09f8249ad22401ad2747cafae12f86fc4c07602d234ad7fa0f3e387b88346: CDI devices 
from CRI Config.CDIDevices: []" Dec 16 12:56:14.287541 containerd[1617]: time="2025-12-16T12:56:14.287461792Z" level=info msg="CreateContainer within sandbox \"a08dbfb690c890be0feed07bd85273c146592644c6fda3252e7f9e37fc7a827d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"02e09f8249ad22401ad2747cafae12f86fc4c07602d234ad7fa0f3e387b88346\"" Dec 16 12:56:14.292399 containerd[1617]: time="2025-12-16T12:56:14.292258843Z" level=info msg="StartContainer for \"02e09f8249ad22401ad2747cafae12f86fc4c07602d234ad7fa0f3e387b88346\"" Dec 16 12:56:14.298656 containerd[1617]: time="2025-12-16T12:56:14.298577613Z" level=info msg="connecting to shim 02e09f8249ad22401ad2747cafae12f86fc4c07602d234ad7fa0f3e387b88346" address="unix:///run/containerd/s/22b420623781fbb8d05c0539a667759860a10d6631dbdf561e782d8927bf13ac" protocol=ttrpc version=3 Dec 16 12:56:14.299668 containerd[1617]: time="2025-12-16T12:56:14.299561456Z" level=info msg="StartContainer for \"d820dda8e22ab279d5ac01dee2e45359f1895904b826139e4bceee72cbe507d0\" returns successfully" Dec 16 12:56:14.349120 systemd[1]: Started cri-containerd-02e09f8249ad22401ad2747cafae12f86fc4c07602d234ad7fa0f3e387b88346.scope - libcontainer container 02e09f8249ad22401ad2747cafae12f86fc4c07602d234ad7fa0f3e387b88346. 
Dec 16 12:56:14.369000 audit: BPF prog-id=254 op=LOAD Dec 16 12:56:14.370000 audit: BPF prog-id=255 op=LOAD Dec 16 12:56:14.370000 audit[4791]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4721 pid=4791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:14.370000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032653039663832343961643232343031616432373437636166616531 Dec 16 12:56:14.370000 audit: BPF prog-id=255 op=UNLOAD Dec 16 12:56:14.370000 audit[4791]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4721 pid=4791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:14.370000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032653039663832343961643232343031616432373437636166616531 Dec 16 12:56:14.371000 audit: BPF prog-id=256 op=LOAD Dec 16 12:56:14.371000 audit[4791]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4721 pid=4791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:14.371000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032653039663832343961643232343031616432373437636166616531 Dec 16 12:56:14.371000 audit: BPF prog-id=257 op=LOAD Dec 16 12:56:14.371000 audit[4791]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4721 pid=4791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:14.371000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032653039663832343961643232343031616432373437636166616531 Dec 16 12:56:14.371000 audit: BPF prog-id=257 op=UNLOAD Dec 16 12:56:14.371000 audit[4791]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4721 pid=4791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:14.371000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032653039663832343961643232343031616432373437636166616531 Dec 16 12:56:14.371000 audit: BPF prog-id=256 op=UNLOAD Dec 16 12:56:14.371000 audit[4791]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4721 pid=4791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:56:14.371000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032653039663832343961643232343031616432373437636166616531 Dec 16 12:56:14.371000 audit: BPF prog-id=258 op=LOAD Dec 16 12:56:14.371000 audit[4791]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4721 pid=4791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:14.371000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032653039663832343961643232343031616432373437636166616531 Dec 16 12:56:14.475256 containerd[1617]: time="2025-12-16T12:56:14.475107129Z" level=info msg="StartContainer for \"02e09f8249ad22401ad2747cafae12f86fc4c07602d234ad7fa0f3e387b88346\" returns successfully" Dec 16 12:56:14.763640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3268389959.mount: Deactivated successfully. 
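The audit `PROCTITLE` records above encode the process command line as hex-encoded bytes with NUL separators between arguments. A minimal sketch of decoding such a payload (standard library only; the sample hex below is the prefix of the `runc` proctitle from the records above):

```python
def decode_proctitle(hex_str: str) -> list[str]:
    """Decode an audit PROCTITLE hex payload into its argv list.

    The kernel logs the process title as hex-encoded bytes,
    with NUL bytes separating the individual arguments.
    """
    raw = bytes.fromhex(hex_str)
    return [part.decode("utf-8", errors="replace")
            for part in raw.split(b"\x00") if part]

# Prefix of the runc proctitle seen in the audit records above:
print(decode_proctitle("72756E63002D2D726F6F74"))  # → ['runc', '--root']
```

The same decoding applied to the `iptables-restor` records further down yields the `iptables-restore` invocation with its `--noflush --counters` flags.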
Dec 16 12:56:14.784243 kubelet[2789]: E1216 12:56:14.783915 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:56:14.790782 kubelet[2789]: E1216 12:56:14.790524 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:56:14.792624 kubelet[2789]: E1216 12:56:14.792574 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b46df5bf6-8vt25" podUID="eceb5ead-85dc-4ae8-98b5-b55994dab5ce" Dec 16 12:56:14.893281 kubelet[2789]: I1216 12:56:14.881913 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-jvzjz" podStartSLOduration=44.881891997 podStartE2EDuration="44.881891997s" podCreationTimestamp="2025-12-16 12:55:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:56:14.825804483 +0000 UTC m=+48.651273907" watchObservedRunningTime="2025-12-16 12:56:14.881891997 +0000 UTC m=+48.707361410" Dec 16 12:56:14.927533 kubelet[2789]: I1216 12:56:14.927139 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-m8248" podStartSLOduration=44.927115985 podStartE2EDuration="44.927115985s" podCreationTimestamp="2025-12-16 12:55:30 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:56:14.925295808 +0000 UTC m=+48.750765226" watchObservedRunningTime="2025-12-16 12:56:14.927115985 +0000 UTC m=+48.752585396" Dec 16 12:56:14.958000 audit[4831]: NETFILTER_CFG table=filter:140 family=2 entries=20 op=nft_register_rule pid=4831 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:56:14.958000 audit[4831]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdd3e36660 a2=0 a3=7ffdd3e3664c items=0 ppid=2936 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:14.958000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:56:14.962000 audit[4831]: NETFILTER_CFG table=nat:141 family=2 entries=14 op=nft_register_rule pid=4831 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:56:14.962000 audit[4831]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdd3e36660 a2=0 a3=0 items=0 ppid=2936 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:14.962000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:56:15.001000 audit[4833]: NETFILTER_CFG table=filter:142 family=2 entries=17 op=nft_register_rule pid=4833 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:56:15.001000 audit[4833]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc87655e90 a2=0 a3=7ffc87655e7c items=0 ppid=2936 pid=4833 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:15.001000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:56:15.025000 audit[4833]: NETFILTER_CFG table=nat:143 family=2 entries=47 op=nft_register_chain pid=4833 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:56:15.025000 audit[4833]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffc87655e90 a2=0 a3=7ffc87655e7c items=0 ppid=2936 pid=4833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:15.025000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:56:15.217005 systemd-networkd[1510]: cali1aac80a85bf: Gained IPv6LL Dec 16 12:56:15.472836 systemd-networkd[1510]: calibbb3810ecf2: Gained IPv6LL Dec 16 12:56:15.793473 kubelet[2789]: E1216 12:56:15.793257 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:56:15.793473 kubelet[2789]: E1216 12:56:15.793260 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:56:16.797901 kubelet[2789]: E1216 12:56:16.797857 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:56:16.798694 kubelet[2789]: E1216 12:56:16.798559 2789 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:56:20.437993 systemd[1]: Started sshd@7-164.90.155.252:22-147.75.109.163:40160.service - OpenSSH per-connection server daemon (147.75.109.163:40160). Dec 16 12:56:20.444654 kernel: kauditd_printk_skb: 214 callbacks suppressed Dec 16 12:56:20.444754 kernel: audit: type=1130 audit(1765889780.436:750): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-164.90.155.252:22-147.75.109.163:40160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-164.90.155.252:22-147.75.109.163:40160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:20.602740 kernel: audit: type=1101 audit(1765889780.595:751): pid=4847 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:20.595000 audit[4847]: USER_ACCT pid=4847 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:20.601048 sshd-session[4847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:56:20.604721 sshd[4847]: Accepted publickey for core from 147.75.109.163 port 40160 ssh2: RSA SHA256:LU1qZA7a/A5pU4aT9e5vsRy+gIvYDHbmZDPRbtPLDD8 Dec 16 12:56:20.597000 audit[4847]: CRED_ACQ pid=4847 uid=0 auid=4294967295 
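The repeated kubelet "Nameserver limits exceeded" warnings above fire because glibc resolvers honor at most three `nameserver` entries in resolv.conf, and the applied line here even contains a duplicate (67.207.67.2 appears twice). A hedged sketch of the kind of order-preserving dedup-and-cap a resolver config generator might apply (`MAXNS = 3` is the glibc limit; the function name is illustrative, not kubelet's actual code):

```python
MAXNS = 3  # glibc limit on nameserver entries in resolv.conf

def effective_nameservers(servers: list[str], limit: int = MAXNS) -> list[str]:
    """Deduplicate nameservers preserving first-seen order, then cap at the limit."""
    seen: set[str] = set()
    unique = [s for s in servers if not (s in seen or seen.add(s))]
    return unique[:limit]

# The applied nameserver line from the kubelet message above:
print(effective_nameservers(["67.207.67.2", "67.207.67.3", "67.207.67.2"]))
# → ['67.207.67.2', '67.207.67.3']
```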
ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:20.610361 kernel: audit: type=1103 audit(1765889780.597:752): pid=4847 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:20.611728 systemd-logind[1584]: New session 8 of user core. Dec 16 12:56:20.615719 kernel: audit: type=1006 audit(1765889780.597:753): pid=4847 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Dec 16 12:56:20.597000 audit[4847]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff12829c60 a2=3 a3=0 items=0 ppid=1 pid=4847 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:20.621653 kernel: audit: type=1300 audit(1765889780.597:753): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff12829c60 a2=3 a3=0 items=0 ppid=1 pid=4847 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:20.618603 systemd[1]: Started session-8.scope - Session 8 of User core. 
Dec 16 12:56:20.597000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:56:20.629755 kernel: audit: type=1327 audit(1765889780.597:753): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:56:20.622000 audit[4847]: USER_START pid=4847 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:20.636755 kernel: audit: type=1105 audit(1765889780.622:754): pid=4847 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:20.634000 audit[4850]: CRED_ACQ pid=4850 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:20.642687 kernel: audit: type=1103 audit(1765889780.634:755): pid=4850 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:21.495065 sshd[4850]: Connection closed by 147.75.109.163 port 40160 Dec 16 12:56:21.497280 sshd-session[4847]: pam_unix(sshd:session): session closed for user core Dec 16 12:56:21.502000 audit[4847]: USER_END pid=4847 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail 
acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:21.511106 kernel: audit: type=1106 audit(1765889781.502:756): pid=4847 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:21.512504 systemd[1]: sshd@7-164.90.155.252:22-147.75.109.163:40160.service: Deactivated successfully. Dec 16 12:56:21.520134 kernel: audit: type=1104 audit(1765889781.502:757): pid=4847 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:21.502000 audit[4847]: CRED_DISP pid=4847 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:21.521700 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 12:56:21.523916 systemd-logind[1584]: Session 8 logged out. Waiting for processes to exit. Dec 16 12:56:21.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-164.90.155.252:22-147.75.109.163:40160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:21.528020 systemd-logind[1584]: Removed session 8. 
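Each in-record audit header above, e.g. `audit(1765889780.436:750)`, carries a Unix epoch timestamp with milliseconds plus a per-boot event serial. A small sketch of parsing one back to the wall-clock time the journal prints alongside it (standard library only):

```python
import re
from datetime import datetime, timezone

# Matches the audit(<seconds>.<millis>:<serial>) header in a record.
AUDIT_TS = re.compile(r"audit\((\d+)\.(\d+):(\d+)\)")

def parse_audit_ts(record: str) -> tuple[datetime, int]:
    """Extract (UTC timestamp, event serial) from an audit record line."""
    m = AUDIT_TS.search(record)
    if m is None:
        raise ValueError("no audit timestamp found")
    secs, millis, serial = int(m.group(1)), int(m.group(2)), int(m.group(3))
    ts = datetime.fromtimestamp(secs, tz=timezone.utc).replace(microsecond=millis * 1000)
    return ts, serial

# The SERVICE_START record for the sshd connection above:
ts, serial = parse_audit_ts("type=1130 audit(1765889780.436:750): pid=1 uid=0 ...")
print(ts.strftime("%b %d %H:%M:%S"), serial)  # → Dec 16 12:56:20 750
```

This matches the journal's own prefix on that record (`Dec 16 12:56:20.444754 kernel: audit: type=1130 audit(1765889780.436:750)`).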
Dec 16 12:56:22.423670 containerd[1617]: time="2025-12-16T12:56:22.423335641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:56:22.842278 containerd[1617]: time="2025-12-16T12:56:22.841976181Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:56:22.843224 containerd[1617]: time="2025-12-16T12:56:22.843078505Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:56:22.843406 containerd[1617]: time="2025-12-16T12:56:22.843212305Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:56:22.843916 kubelet[2789]: E1216 12:56:22.843799 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:56:22.843916 kubelet[2789]: E1216 12:56:22.843871 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:56:22.844969 kubelet[2789]: E1216 12:56:22.844607 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b2c13645dac8477e847d7b1cce258193,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hqmlr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-675645944b-fkqz9_calico-system(dd978ccd-4987-4640-957a-11962c9801ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:56:22.847694 containerd[1617]: time="2025-12-16T12:56:22.847415674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:56:23.182039 containerd[1617]: 
time="2025-12-16T12:56:23.181498782Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:56:23.183251 containerd[1617]: time="2025-12-16T12:56:23.183081022Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:56:23.183251 containerd[1617]: time="2025-12-16T12:56:23.183210069Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:56:23.183642 kubelet[2789]: E1216 12:56:23.183564 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:56:23.183791 kubelet[2789]: E1216 12:56:23.183755 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:56:23.184658 kubelet[2789]: E1216 12:56:23.184437 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hqmlr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-675645944b-fkqz9_calico-system(dd978ccd-4987-4640-957a-11962c9801ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:56:23.186129 kubelet[2789]: E1216 12:56:23.186066 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-675645944b-fkqz9" podUID="dd978ccd-4987-4640-957a-11962c9801ea" Dec 16 12:56:23.420650 containerd[1617]: time="2025-12-16T12:56:23.420586275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:56:23.703177 containerd[1617]: time="2025-12-16T12:56:23.702926746Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:56:23.706180 containerd[1617]: time="2025-12-16T12:56:23.706127046Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:56:23.706588 containerd[1617]: time="2025-12-16T12:56:23.706233129Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:56:23.706659 kubelet[2789]: E1216 12:56:23.706407 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed 
to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:56:23.706659 kubelet[2789]: E1216 12:56:23.706469 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:56:23.707280 kubelet[2789]: E1216 12:56:23.707213 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l6wnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f6f6f8cd5-nxqwt_calico-apiserver(4d5fd089-d56c-460c-b006-cc36a126ec32): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:56:23.708850 kubelet[2789]: E1216 12:56:23.708794 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f6f6f8cd5-nxqwt" podUID="4d5fd089-d56c-460c-b006-cc36a126ec32" Dec 16 12:56:25.419895 containerd[1617]: time="2025-12-16T12:56:25.419537350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:56:25.745307 containerd[1617]: time="2025-12-16T12:56:25.745248150Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 
12:56:25.746147 containerd[1617]: time="2025-12-16T12:56:25.746088073Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:56:25.746442 containerd[1617]: time="2025-12-16T12:56:25.746223637Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:56:25.747069 kubelet[2789]: E1216 12:56:25.746500 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:56:25.747069 kubelet[2789]: E1216 12:56:25.746576 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:56:25.747069 kubelet[2789]: E1216 12:56:25.746995 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lwc8p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rmjtf_calico-system(7b89c039-0754-43bd-ad85-5506dee48dad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Dec 16 12:56:25.748089 containerd[1617]: time="2025-12-16T12:56:25.748062460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:56:26.094746 containerd[1617]: time="2025-12-16T12:56:26.093533577Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:56:26.096484 containerd[1617]: time="2025-12-16T12:56:26.095719395Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:56:26.096484 containerd[1617]: time="2025-12-16T12:56:26.095861245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:56:26.096990 kubelet[2789]: E1216 12:56:26.096948 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:56:26.097254 kubelet[2789]: E1216 12:56:26.097104 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:56:26.098154 containerd[1617]: time="2025-12-16T12:56:26.098090628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:56:26.098556 kubelet[2789]: E1216 12:56:26.097928 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vwjlh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-hvm4l_calico-system(9d74f6fb-d2c4-41ff-9241-88dfaec31538): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:56:26.100078 kubelet[2789]: E1216 12:56:26.099999 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hvm4l" podUID="9d74f6fb-d2c4-41ff-9241-88dfaec31538" Dec 16 12:56:26.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-164.90.155.252:22-147.75.109.163:49058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:56:26.516046 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:56:26.516106 kernel: audit: type=1130 audit(1765889786.512:759): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-164.90.155.252:22-147.75.109.163:49058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:26.513965 systemd[1]: Started sshd@8-164.90.155.252:22-147.75.109.163:49058.service - OpenSSH per-connection server daemon (147.75.109.163:49058). Dec 16 12:56:26.552103 containerd[1617]: time="2025-12-16T12:56:26.552050755Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:56:26.553833 containerd[1617]: time="2025-12-16T12:56:26.553699563Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:56:26.554030 containerd[1617]: time="2025-12-16T12:56:26.553946733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:56:26.554613 kubelet[2789]: E1216 12:56:26.554529 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:56:26.554613 kubelet[2789]: E1216 12:56:26.554603 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:56:26.554925 kubelet[2789]: E1216 12:56:26.554821 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lwc8p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rmjtf_calico-system(7b89c039-0754-43bd-ad85-5506dee48dad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:56:26.557413 kubelet[2789]: E1216 12:56:26.557358 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rmjtf" podUID="7b89c039-0754-43bd-ad85-5506dee48dad" Dec 16 12:56:26.637000 audit[4868]: USER_ACCT pid=4868 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:26.641045 sshd[4868]: Accepted publickey for core from 147.75.109.163 port 49058 ssh2: RSA SHA256:LU1qZA7a/A5pU4aT9e5vsRy+gIvYDHbmZDPRbtPLDD8 Dec 16 12:56:26.639000 audit[4868]: CRED_ACQ pid=4868 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 
16 12:56:26.643593 sshd-session[4868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:56:26.644626 kernel: audit: type=1101 audit(1765889786.637:760): pid=4868 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:26.644981 kernel: audit: type=1103 audit(1765889786.639:761): pid=4868 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:26.651679 kernel: audit: type=1006 audit(1765889786.639:762): pid=4868 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Dec 16 12:56:26.639000 audit[4868]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea46453a0 a2=3 a3=0 items=0 ppid=1 pid=4868 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:26.664659 kernel: audit: type=1300 audit(1765889786.639:762): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea46453a0 a2=3 a3=0 items=0 ppid=1 pid=4868 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:26.664750 systemd-logind[1584]: New session 9 of user core. 
Dec 16 12:56:26.639000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:56:26.668656 kernel: audit: type=1327 audit(1765889786.639:762): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:56:26.669934 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 12:56:26.673000 audit[4868]: USER_START pid=4868 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:26.680768 kernel: audit: type=1105 audit(1765889786.673:763): pid=4868 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:26.676000 audit[4871]: CRED_ACQ pid=4871 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:26.694666 kernel: audit: type=1103 audit(1765889786.676:764): pid=4871 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:26.878248 sshd[4871]: Connection closed by 147.75.109.163 port 49058 Dec 16 12:56:26.880068 sshd-session[4868]: pam_unix(sshd:session): session closed for user core Dec 16 12:56:26.884000 audit[4868]: USER_END pid=4868 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:26.892769 kernel: audit: type=1106 audit(1765889786.884:765): pid=4868 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:26.890000 audit[4868]: CRED_DISP pid=4868 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:26.894969 systemd[1]: sshd@8-164.90.155.252:22-147.75.109.163:49058.service: Deactivated successfully. Dec 16 12:56:26.899669 kernel: audit: type=1104 audit(1765889786.890:766): pid=4868 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:26.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-164.90.155.252:22-147.75.109.163:49058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:26.900068 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 12:56:26.906245 systemd-logind[1584]: Session 9 logged out. Waiting for processes to exit. Dec 16 12:56:26.907748 systemd-logind[1584]: Removed session 9. 
Dec 16 12:56:27.421413 containerd[1617]: time="2025-12-16T12:56:27.419936730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:56:27.780979 containerd[1617]: time="2025-12-16T12:56:27.780929473Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:56:27.782163 containerd[1617]: time="2025-12-16T12:56:27.782060929Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:56:27.782163 containerd[1617]: time="2025-12-16T12:56:27.782123297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:56:27.782732 kubelet[2789]: E1216 12:56:27.782667 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:56:27.783110 kubelet[2789]: E1216 12:56:27.782942 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:56:27.783744 kubelet[2789]: E1216 12:56:27.783665 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-plx6k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f6f6f8cd5-qx8fz_calico-apiserver(5a9eaa6d-ffc5-496a-b44d-d7e196b6b18c): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:56:27.784928 kubelet[2789]: E1216 12:56:27.784880 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f6f6f8cd5-qx8fz" podUID="5a9eaa6d-ffc5-496a-b44d-d7e196b6b18c" Dec 16 12:56:29.419618 containerd[1617]: time="2025-12-16T12:56:29.419165880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:56:29.714395 containerd[1617]: time="2025-12-16T12:56:29.714182013Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:56:29.715145 containerd[1617]: time="2025-12-16T12:56:29.715021099Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:56:29.715145 containerd[1617]: time="2025-12-16T12:56:29.715111367Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:56:29.715355 kubelet[2789]: E1216 12:56:29.715300 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:56:29.715988 kubelet[2789]: E1216 12:56:29.715365 2789 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:56:29.715988 kubelet[2789]: E1216 12:56:29.715530 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qrzn7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7b46df5bf6-8vt25_calico-system(eceb5ead-85dc-4ae8-98b5-b55994dab5ce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:56:29.717156 kubelet[2789]: E1216 12:56:29.716738 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b46df5bf6-8vt25" podUID="eceb5ead-85dc-4ae8-98b5-b55994dab5ce" Dec 16 12:56:31.902648 systemd[1]: 
Started sshd@9-164.90.155.252:22-147.75.109.163:49068.service - OpenSSH per-connection server daemon (147.75.109.163:49068). Dec 16 12:56:31.909669 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:56:31.909864 kernel: audit: type=1130 audit(1765889791.902:768): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-164.90.155.252:22-147.75.109.163:49068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:31.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-164.90.155.252:22-147.75.109.163:49068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:32.118000 audit[4892]: USER_ACCT pid=4892 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:32.125672 kernel: audit: type=1101 audit(1765889792.118:769): pid=4892 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:32.127690 sshd[4892]: Accepted publickey for core from 147.75.109.163 port 49068 ssh2: RSA SHA256:LU1qZA7a/A5pU4aT9e5vsRy+gIvYDHbmZDPRbtPLDD8 Dec 16 12:56:32.128000 audit[4892]: CRED_ACQ pid=4892 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:32.136684 kernel: audit: type=1103 audit(1765889792.128:770): pid=4892 uid=0 auid=4294967295 
ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:32.136839 kernel: audit: type=1006 audit(1765889792.128:771): pid=4892 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Dec 16 12:56:32.142367 sshd-session[4892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:56:32.151000 kernel: audit: type=1300 audit(1765889792.128:771): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd179cc30 a2=3 a3=0 items=0 ppid=1 pid=4892 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:32.128000 audit[4892]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd179cc30 a2=3 a3=0 items=0 ppid=1 pid=4892 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:32.158723 systemd-logind[1584]: New session 10 of user core. Dec 16 12:56:32.128000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:56:32.164653 kernel: audit: type=1327 audit(1765889792.128:771): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:56:32.169048 systemd[1]: Started session-10.scope - Session 10 of User core. 
Dec 16 12:56:32.181966 kernel: audit: type=1105 audit(1765889792.172:772): pid=4892 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:32.172000 audit[4892]: USER_START pid=4892 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:32.183000 audit[4897]: CRED_ACQ pid=4897 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:32.196652 kernel: audit: type=1103 audit(1765889792.183:773): pid=4897 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:32.423736 sshd[4897]: Connection closed by 147.75.109.163 port 49068 Dec 16 12:56:32.422378 sshd-session[4892]: pam_unix(sshd:session): session closed for user core Dec 16 12:56:32.437156 kernel: audit: type=1106 audit(1765889792.423:774): pid=4892 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:32.423000 audit[4892]: USER_END pid=4892 uid=0 auid=500 ses=10 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:32.425000 audit[4892]: CRED_DISP pid=4892 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:32.449826 kernel: audit: type=1104 audit(1765889792.425:775): pid=4892 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:32.450534 systemd[1]: sshd@9-164.90.155.252:22-147.75.109.163:49068.service: Deactivated successfully. Dec 16 12:56:32.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-164.90.155.252:22-147.75.109.163:49068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:32.458105 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 12:56:32.462668 systemd-logind[1584]: Session 10 logged out. Waiting for processes to exit. Dec 16 12:56:32.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-164.90.155.252:22-147.75.109.163:54450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:32.470324 systemd[1]: Started sshd@10-164.90.155.252:22-147.75.109.163:54450.service - OpenSSH per-connection server daemon (147.75.109.163:54450). Dec 16 12:56:32.473736 systemd-logind[1584]: Removed session 10. 
Dec 16 12:56:32.570000 audit[4909]: USER_ACCT pid=4909 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:32.573799 sshd[4909]: Accepted publickey for core from 147.75.109.163 port 54450 ssh2: RSA SHA256:LU1qZA7a/A5pU4aT9e5vsRy+gIvYDHbmZDPRbtPLDD8 Dec 16 12:56:32.573000 audit[4909]: CRED_ACQ pid=4909 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:32.573000 audit[4909]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd74ac8f0 a2=3 a3=0 items=0 ppid=1 pid=4909 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:32.573000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:56:32.576280 sshd-session[4909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:56:32.591223 systemd-logind[1584]: New session 11 of user core. Dec 16 12:56:32.596012 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 16 12:56:32.599000 audit[4909]: USER_START pid=4909 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:32.604000 audit[4912]: CRED_ACQ pid=4912 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:32.872768 sshd[4912]: Connection closed by 147.75.109.163 port 54450 Dec 16 12:56:32.873885 sshd-session[4909]: pam_unix(sshd:session): session closed for user core Dec 16 12:56:32.877000 audit[4909]: USER_END pid=4909 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:32.877000 audit[4909]: CRED_DISP pid=4909 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:32.888257 systemd[1]: sshd@10-164.90.155.252:22-147.75.109.163:54450.service: Deactivated successfully. Dec 16 12:56:32.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-164.90.155.252:22-147.75.109.163:54450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:32.894455 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 12:56:32.899923 systemd-logind[1584]: Session 11 logged out. 
Waiting for processes to exit. Dec 16 12:56:32.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-164.90.155.252:22-147.75.109.163:54458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:32.906132 systemd[1]: Started sshd@11-164.90.155.252:22-147.75.109.163:54458.service - OpenSSH per-connection server daemon (147.75.109.163:54458). Dec 16 12:56:32.909322 systemd-logind[1584]: Removed session 11. Dec 16 12:56:32.989000 audit[4922]: USER_ACCT pid=4922 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:32.991703 sshd[4922]: Accepted publickey for core from 147.75.109.163 port 54458 ssh2: RSA SHA256:LU1qZA7a/A5pU4aT9e5vsRy+gIvYDHbmZDPRbtPLDD8 Dec 16 12:56:32.991000 audit[4922]: CRED_ACQ pid=4922 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:32.991000 audit[4922]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde89c51d0 a2=3 a3=0 items=0 ppid=1 pid=4922 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:32.991000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:56:32.994058 sshd-session[4922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:56:33.003936 systemd-logind[1584]: New session 12 of user core. 
Dec 16 12:56:33.009932 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 16 12:56:33.014000 audit[4922]: USER_START pid=4922 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:33.018000 audit[4925]: CRED_ACQ pid=4925 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:33.151761 sshd[4925]: Connection closed by 147.75.109.163 port 54458 Dec 16 12:56:33.151817 sshd-session[4922]: pam_unix(sshd:session): session closed for user core Dec 16 12:56:33.152000 audit[4922]: USER_END pid=4922 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:33.152000 audit[4922]: CRED_DISP pid=4922 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:33.159364 systemd[1]: sshd@11-164.90.155.252:22-147.75.109.163:54458.service: Deactivated successfully. Dec 16 12:56:33.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-164.90.155.252:22-147.75.109.163:54458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:56:33.164850 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 12:56:33.166515 systemd-logind[1584]: Session 12 logged out. Waiting for processes to exit. Dec 16 12:56:33.169762 systemd-logind[1584]: Removed session 12. Dec 16 12:56:37.419695 kubelet[2789]: E1216 12:56:37.419570 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hvm4l" podUID="9d74f6fb-d2c4-41ff-9241-88dfaec31538" Dec 16 12:56:37.420963 kubelet[2789]: E1216 12:56:37.420879 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-675645944b-fkqz9" podUID="dd978ccd-4987-4640-957a-11962c9801ea" Dec 16 12:56:38.178806 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 16 12:56:38.178955 kernel: audit: type=1130 audit(1765889798.171:795): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@12-164.90.155.252:22-147.75.109.163:54460 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:38.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-164.90.155.252:22-147.75.109.163:54460 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:38.172991 systemd[1]: Started sshd@12-164.90.155.252:22-147.75.109.163:54460.service - OpenSSH per-connection server daemon (147.75.109.163:54460). Dec 16 12:56:38.270000 audit[4942]: USER_ACCT pid=4942 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:38.276712 kernel: audit: type=1101 audit(1765889798.270:796): pid=4942 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:38.274223 sshd-session[4942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:56:38.277479 sshd[4942]: Accepted publickey for core from 147.75.109.163 port 54460 ssh2: RSA SHA256:LU1qZA7a/A5pU4aT9e5vsRy+gIvYDHbmZDPRbtPLDD8 Dec 16 12:56:38.284680 kernel: audit: type=1103 audit(1765889798.270:797): pid=4942 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:38.285739 kernel: audit: type=1006 audit(1765889798.270:798): pid=4942 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 
auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 16 12:56:38.270000 audit[4942]: CRED_ACQ pid=4942 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:38.287669 kernel: audit: type=1300 audit(1765889798.270:798): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd6ec095d0 a2=3 a3=0 items=0 ppid=1 pid=4942 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:38.270000 audit[4942]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd6ec095d0 a2=3 a3=0 items=0 ppid=1 pid=4942 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:38.270000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:56:38.298359 systemd-logind[1584]: New session 13 of user core. Dec 16 12:56:38.298798 kernel: audit: type=1327 audit(1765889798.270:798): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:56:38.304980 systemd[1]: Started session-13.scope - Session 13 of User core. 
Dec 16 12:56:38.310000 audit[4942]: USER_START pid=4942 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:38.318769 kernel: audit: type=1105 audit(1765889798.310:799): pid=4942 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:38.317000 audit[4945]: CRED_ACQ pid=4945 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:38.325838 kernel: audit: type=1103 audit(1765889798.317:800): pid=4945 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:38.422221 kubelet[2789]: E1216 12:56:38.420927 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f6f6f8cd5-nxqwt" podUID="4d5fd089-d56c-460c-b006-cc36a126ec32" Dec 16 12:56:38.422221 kubelet[2789]: E1216 12:56:38.421570 2789 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:56:38.463824 sshd[4945]: Connection closed by 147.75.109.163 port 54460 Dec 16 12:56:38.464701 sshd-session[4942]: pam_unix(sshd:session): session closed for user core Dec 16 12:56:38.477380 kernel: audit: type=1106 audit(1765889798.466:801): pid=4942 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:38.466000 audit[4942]: USER_END pid=4942 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:38.474299 systemd[1]: sshd@12-164.90.155.252:22-147.75.109.163:54460.service: Deactivated successfully. Dec 16 12:56:38.476410 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 12:56:38.481313 systemd-logind[1584]: Session 13 logged out. Waiting for processes to exit. Dec 16 12:56:38.482864 systemd-logind[1584]: Removed session 13. Dec 16 12:56:38.466000 audit[4942]: CRED_DISP pid=4942 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:38.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-164.90.155.252:22-147.75.109.163:54460 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 16 12:56:38.488774 kernel: audit: type=1104 audit(1765889798.466:802): pid=4942 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:39.421308 kubelet[2789]: E1216 12:56:39.421148 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rmjtf" podUID="7b89c039-0754-43bd-ad85-5506dee48dad" Dec 16 12:56:40.419375 kubelet[2789]: E1216 12:56:40.419318 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:56:40.423684 kubelet[2789]: E1216 12:56:40.422504 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f6f6f8cd5-qx8fz" podUID="5a9eaa6d-ffc5-496a-b44d-d7e196b6b18c" Dec 16 12:56:43.420016 kubelet[2789]: E1216 12:56:43.419897 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b46df5bf6-8vt25" podUID="eceb5ead-85dc-4ae8-98b5-b55994dab5ce" Dec 16 12:56:43.491755 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:56:43.491880 kernel: audit: type=1130 audit(1765889803.482:804): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-164.90.155.252:22-147.75.109.163:50654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:43.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-164.90.155.252:22-147.75.109.163:50654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:43.484119 systemd[1]: Started sshd@13-164.90.155.252:22-147.75.109.163:50654.service - OpenSSH per-connection server daemon (147.75.109.163:50654). 
Dec 16 12:56:43.634797 kernel: audit: type=1101 audit(1765889803.626:805): pid=4983 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:43.626000 audit[4983]: USER_ACCT pid=4983 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:43.632234 sshd-session[4983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:56:43.635561 sshd[4983]: Accepted publickey for core from 147.75.109.163 port 50654 ssh2: RSA SHA256:LU1qZA7a/A5pU4aT9e5vsRy+gIvYDHbmZDPRbtPLDD8 Dec 16 12:56:43.640726 kernel: audit: type=1103 audit(1765889803.629:806): pid=4983 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:43.629000 audit[4983]: CRED_ACQ pid=4983 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:43.650110 kernel: audit: type=1006 audit(1765889803.629:807): pid=4983 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 16 12:56:43.650291 kernel: audit: type=1300 audit(1765889803.629:807): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff16883bc0 a2=3 a3=0 items=0 ppid=1 pid=4983 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:43.629000 audit[4983]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff16883bc0 a2=3 a3=0 items=0 ppid=1 pid=4983 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:43.652584 kernel: audit: type=1327 audit(1765889803.629:807): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:56:43.629000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:56:43.657099 systemd-logind[1584]: New session 14 of user core. Dec 16 12:56:43.659013 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 16 12:56:43.664000 audit[4983]: USER_START pid=4983 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:43.671697 kernel: audit: type=1105 audit(1765889803.664:808): pid=4983 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:43.673000 audit[4986]: CRED_ACQ pid=4986 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:43.681681 kernel: audit: type=1103 audit(1765889803.673:809): pid=4986 uid=0 auid=500 ses=14 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:43.843274 sshd[4986]: Connection closed by 147.75.109.163 port 50654 Dec 16 12:56:43.845905 sshd-session[4983]: pam_unix(sshd:session): session closed for user core Dec 16 12:56:43.848000 audit[4983]: USER_END pid=4983 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:43.856676 kernel: audit: type=1106 audit(1765889803.848:810): pid=4983 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:43.858318 systemd[1]: sshd@13-164.90.155.252:22-147.75.109.163:50654.service: Deactivated successfully. Dec 16 12:56:43.848000 audit[4983]: CRED_DISP pid=4983 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:43.866676 kernel: audit: type=1104 audit(1765889803.848:811): pid=4983 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:43.869386 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 12:56:43.870847 systemd-logind[1584]: Session 14 logged out. 
Waiting for processes to exit. Dec 16 12:56:43.875416 systemd-logind[1584]: Removed session 14. Dec 16 12:56:43.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-164.90.155.252:22-147.75.109.163:50654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:48.422125 containerd[1617]: time="2025-12-16T12:56:48.422073722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:56:48.764048 containerd[1617]: time="2025-12-16T12:56:48.763981654Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:56:48.765339 containerd[1617]: time="2025-12-16T12:56:48.765259840Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:56:48.765339 containerd[1617]: time="2025-12-16T12:56:48.765306442Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:56:48.765794 kubelet[2789]: E1216 12:56:48.765739 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:56:48.766333 kubelet[2789]: E1216 12:56:48.765817 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:56:48.766333 kubelet[2789]: E1216 12:56:48.766017 2789 kuberuntime_manager.go:1358] 
"Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b2c13645dac8477e847d7b1cce258193,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hqmlr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-675645944b-fkqz9_calico-system(dd978ccd-4987-4640-957a-11962c9801ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:56:48.770264 containerd[1617]: time="2025-12-16T12:56:48.770221078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:56:48.860664 kernel: 
kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:56:48.860807 kernel: audit: type=1130 audit(1765889808.857:813): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-164.90.155.252:22-147.75.109.163:50666 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:48.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-164.90.155.252:22-147.75.109.163:50666 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:48.858750 systemd[1]: Started sshd@14-164.90.155.252:22-147.75.109.163:50666.service - OpenSSH per-connection server daemon (147.75.109.163:50666). Dec 16 12:56:48.934000 audit[5001]: USER_ACCT pid=5001 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:48.939718 sshd[5001]: Accepted publickey for core from 147.75.109.163 port 50666 ssh2: RSA SHA256:LU1qZA7a/A5pU4aT9e5vsRy+gIvYDHbmZDPRbtPLDD8 Dec 16 12:56:48.940662 kernel: audit: type=1101 audit(1765889808.934:814): pid=5001 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:48.940893 sshd-session[5001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:56:48.938000 audit[5001]: CRED_ACQ pid=5001 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 
terminal=ssh res=success' Dec 16 12:56:48.948782 kernel: audit: type=1103 audit(1765889808.938:815): pid=5001 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:48.952578 systemd-logind[1584]: New session 15 of user core. Dec 16 12:56:48.957678 kernel: audit: type=1006 audit(1765889808.938:816): pid=5001 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 16 12:56:48.958912 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 12:56:48.938000 audit[5001]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe62bf3cb0 a2=3 a3=0 items=0 ppid=1 pid=5001 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:48.963660 kernel: audit: type=1300 audit(1765889808.938:816): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe62bf3cb0 a2=3 a3=0 items=0 ppid=1 pid=5001 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:48.938000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:56:48.968680 kernel: audit: type=1327 audit(1765889808.938:816): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:56:48.963000 audit[5001]: USER_START pid=5001 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 
12:56:48.974194 kernel: audit: type=1105 audit(1765889808.963:817): pid=5001 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:48.974325 kernel: audit: type=1103 audit(1765889808.967:818): pid=5008 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:48.967000 audit[5008]: CRED_ACQ pid=5008 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:49.082910 containerd[1617]: time="2025-12-16T12:56:49.082776250Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:56:49.083902 containerd[1617]: time="2025-12-16T12:56:49.083794507Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:56:49.084031 containerd[1617]: time="2025-12-16T12:56:49.083955866Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:56:49.084646 kubelet[2789]: E1216 12:56:49.084176 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:56:49.084646 kubelet[2789]: E1216 12:56:49.084242 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:56:49.085421 kubelet[2789]: E1216 12:56:49.084788 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hqmlr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNon
Root:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-675645944b-fkqz9_calico-system(dd978ccd-4987-4640-957a-11962c9801ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:56:49.086714 kubelet[2789]: E1216 12:56:49.086398 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-675645944b-fkqz9" podUID="dd978ccd-4987-4640-957a-11962c9801ea" Dec 16 12:56:49.117576 sshd[5008]: Connection closed by 147.75.109.163 port 50666 Dec 16 12:56:49.118904 sshd-session[5001]: pam_unix(sshd:session): session closed for user core Dec 16 12:56:49.121000 audit[5001]: USER_END pid=5001 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 
addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:49.128586 kernel: audit: type=1106 audit(1765889809.121:819): pid=5001 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:49.128057 systemd[1]: sshd@14-164.90.155.252:22-147.75.109.163:50666.service: Deactivated successfully. Dec 16 12:56:49.130512 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 12:56:49.121000 audit[5001]: CRED_DISP pid=5001 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:49.135769 kernel: audit: type=1104 audit(1765889809.121:820): pid=5001 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:49.136128 systemd-logind[1584]: Session 15 logged out. Waiting for processes to exit. Dec 16 12:56:49.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-164.90.155.252:22-147.75.109.163:50666 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:49.142151 systemd-logind[1584]: Removed session 15. 
Dec 16 12:56:49.419746 containerd[1617]: time="2025-12-16T12:56:49.418407921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:56:49.737006 containerd[1617]: time="2025-12-16T12:56:49.736945109Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:56:49.737727 containerd[1617]: time="2025-12-16T12:56:49.737684707Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:56:49.737727 containerd[1617]: time="2025-12-16T12:56:49.737753304Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:56:49.738209 kubelet[2789]: E1216 12:56:49.738153 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:56:49.738663 kubelet[2789]: E1216 12:56:49.738228 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:56:49.738663 kubelet[2789]: E1216 12:56:49.738405 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vwjlh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-hvm4l_calico-system(9d74f6fb-d2c4-41ff-9241-88dfaec31538): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:56:49.740064 kubelet[2789]: E1216 12:56:49.740003 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hvm4l" podUID="9d74f6fb-d2c4-41ff-9241-88dfaec31538" Dec 16 12:56:50.420223 containerd[1617]: time="2025-12-16T12:56:50.420169246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:56:50.733255 containerd[1617]: time="2025-12-16T12:56:50.733199679Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:56:50.734038 containerd[1617]: 
time="2025-12-16T12:56:50.733984136Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:56:50.734145 containerd[1617]: time="2025-12-16T12:56:50.734104408Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:56:50.735272 kubelet[2789]: E1216 12:56:50.735181 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:56:50.736478 kubelet[2789]: E1216 12:56:50.735292 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:56:50.736478 kubelet[2789]: E1216 12:56:50.735576 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l6wnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f6f6f8cd5-nxqwt_calico-apiserver(4d5fd089-d56c-460c-b006-cc36a126ec32): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:56:50.737398 kubelet[2789]: E1216 12:56:50.737326 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f6f6f8cd5-nxqwt" podUID="4d5fd089-d56c-460c-b006-cc36a126ec32" Dec 16 12:56:51.420579 containerd[1617]: time="2025-12-16T12:56:51.419912803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:56:51.747357 containerd[1617]: time="2025-12-16T12:56:51.747281920Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:56:51.759256 containerd[1617]: time="2025-12-16T12:56:51.758890055Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:56:51.759256 containerd[1617]: time="2025-12-16T12:56:51.758975720Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:56:51.759801 kubelet[2789]: E1216 12:56:51.759709 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:56:51.760478 kubelet[2789]: E1216 12:56:51.759799 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:56:51.760478 kubelet[2789]: E1216 12:56:51.760005 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lwc8p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fil
e,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rmjtf_calico-system(7b89c039-0754-43bd-ad85-5506dee48dad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:56:51.763384 containerd[1617]: time="2025-12-16T12:56:51.763267844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:56:52.087016 containerd[1617]: time="2025-12-16T12:56:52.086789287Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:56:52.087779 containerd[1617]: time="2025-12-16T12:56:52.087699342Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:56:52.087779 containerd[1617]: time="2025-12-16T12:56:52.087740123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:56:52.088146 kubelet[2789]: E1216 12:56:52.088093 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:56:52.088507 kubelet[2789]: E1216 12:56:52.088341 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:56:52.088932 kubelet[2789]: E1216 12:56:52.088873 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lwc8p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rmjtf_calico-system(7b89c039-0754-43bd-ad85-5506dee48dad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:56:52.090545 kubelet[2789]: E1216 12:56:52.090505 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rmjtf" podUID="7b89c039-0754-43bd-ad85-5506dee48dad" Dec 16 12:56:54.134838 systemd[1]: Started sshd@15-164.90.155.252:22-147.75.109.163:44714.service - OpenSSH per-connection server daemon (147.75.109.163:44714). Dec 16 12:56:54.139732 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:56:54.139819 kernel: audit: type=1130 audit(1765889814.133:822): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-164.90.155.252:22-147.75.109.163:44714 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:54.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-164.90.155.252:22-147.75.109.163:44714 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 16 12:56:54.226262 sshd[5021]: Accepted publickey for core from 147.75.109.163 port 44714 ssh2: RSA SHA256:LU1qZA7a/A5pU4aT9e5vsRy+gIvYDHbmZDPRbtPLDD8 Dec 16 12:56:54.223000 audit[5021]: USER_ACCT pid=5021 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:54.232669 kernel: audit: type=1101 audit(1765889814.223:823): pid=5021 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:54.232816 kernel: audit: type=1103 audit(1765889814.229:824): pid=5021 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:54.229000 audit[5021]: CRED_ACQ pid=5021 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:54.231656 sshd-session[5021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:56:54.240680 kernel: audit: type=1006 audit(1765889814.229:825): pid=5021 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 16 12:56:54.240822 kernel: audit: type=1300 audit(1765889814.229:825): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff86d6f080 a2=3 a3=0 items=0 ppid=1 pid=5021 auid=500 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:54.229000 audit[5021]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff86d6f080 a2=3 a3=0 items=0 ppid=1 pid=5021 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:54.229000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:56:54.247817 kernel: audit: type=1327 audit(1765889814.229:825): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:56:54.255231 systemd-logind[1584]: New session 16 of user core. Dec 16 12:56:54.258929 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 16 12:56:54.274891 kernel: audit: type=1105 audit(1765889814.267:826): pid=5021 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:54.267000 audit[5021]: USER_START pid=5021 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:54.274000 audit[5025]: CRED_ACQ pid=5025 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:54.289707 kernel: audit: type=1103 audit(1765889814.274:827): pid=5025 uid=0 
auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:54.403823 sshd[5025]: Connection closed by 147.75.109.163 port 44714 Dec 16 12:56:54.404528 sshd-session[5021]: pam_unix(sshd:session): session closed for user core Dec 16 12:56:54.407000 audit[5021]: USER_END pid=5021 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:54.416683 kernel: audit: type=1106 audit(1765889814.407:828): pid=5021 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:54.417521 kubelet[2789]: E1216 12:56:54.417467 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:56:54.408000 audit[5021]: CRED_DISP pid=5021 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:54.429669 kernel: audit: type=1104 audit(1765889814.408:829): pid=5021 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh 
res=success' Dec 16 12:56:54.430292 systemd[1]: sshd@15-164.90.155.252:22-147.75.109.163:44714.service: Deactivated successfully. Dec 16 12:56:54.434752 containerd[1617]: time="2025-12-16T12:56:54.432276165Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:56:54.433850 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 12:56:54.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-164.90.155.252:22-147.75.109.163:44714 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:54.441324 systemd-logind[1584]: Session 16 logged out. Waiting for processes to exit. Dec 16 12:56:54.447240 systemd[1]: Started sshd@16-164.90.155.252:22-147.75.109.163:44728.service - OpenSSH per-connection server daemon (147.75.109.163:44728). Dec 16 12:56:54.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-164.90.155.252:22-147.75.109.163:44728 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:54.450127 systemd-logind[1584]: Removed session 16. 
Dec 16 12:56:54.536000 audit[5037]: USER_ACCT pid=5037 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:54.538563 sshd[5037]: Accepted publickey for core from 147.75.109.163 port 44728 ssh2: RSA SHA256:LU1qZA7a/A5pU4aT9e5vsRy+gIvYDHbmZDPRbtPLDD8 Dec 16 12:56:54.539000 audit[5037]: CRED_ACQ pid=5037 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:54.539000 audit[5037]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff063754f0 a2=3 a3=0 items=0 ppid=1 pid=5037 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:54.539000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:56:54.542273 sshd-session[5037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:56:54.557725 systemd-logind[1584]: New session 17 of user core. Dec 16 12:56:54.562989 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 16 12:56:54.566000 audit[5037]: USER_START pid=5037 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:54.572000 audit[5040]: CRED_ACQ pid=5040 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:54.780695 containerd[1617]: time="2025-12-16T12:56:54.779792846Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:56:54.783364 containerd[1617]: time="2025-12-16T12:56:54.783184233Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:56:54.783364 containerd[1617]: time="2025-12-16T12:56:54.783320568Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:56:54.785088 kubelet[2789]: E1216 12:56:54.784946 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:56:54.786603 kubelet[2789]: E1216 12:56:54.786390 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:56:54.788973 kubelet[2789]: E1216 12:56:54.788843 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qrzn7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7b46df5bf6-8vt25_calico-system(eceb5ead-85dc-4ae8-98b5-b55994dab5ce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:56:54.790181 kubelet[2789]: E1216 12:56:54.790110 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b46df5bf6-8vt25" podUID="eceb5ead-85dc-4ae8-98b5-b55994dab5ce" Dec 16 12:56:54.933494 sshd[5040]: Connection closed by 147.75.109.163 port 44728 Dec 16 12:56:54.940349 sshd-session[5037]: pam_unix(sshd:session): session closed for user core Dec 16 12:56:54.952342 systemd[1]: Started sshd@17-164.90.155.252:22-147.75.109.163:44734.service - OpenSSH per-connection server 
daemon (147.75.109.163:44734). Dec 16 12:56:54.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-164.90.155.252:22-147.75.109.163:44734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:54.957000 audit[5037]: USER_END pid=5037 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:54.960000 audit[5037]: CRED_DISP pid=5037 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:54.971680 systemd[1]: sshd@16-164.90.155.252:22-147.75.109.163:44728.service: Deactivated successfully. Dec 16 12:56:54.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-164.90.155.252:22-147.75.109.163:44728 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:54.979267 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 12:56:54.987477 systemd-logind[1584]: Session 17 logged out. Waiting for processes to exit. Dec 16 12:56:54.993374 systemd-logind[1584]: Removed session 17. 
Dec 16 12:56:55.080000 audit[5047]: USER_ACCT pid=5047 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:55.084682 sshd[5047]: Accepted publickey for core from 147.75.109.163 port 44734 ssh2: RSA SHA256:LU1qZA7a/A5pU4aT9e5vsRy+gIvYDHbmZDPRbtPLDD8 Dec 16 12:56:55.085000 audit[5047]: CRED_ACQ pid=5047 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:55.085000 audit[5047]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc7947d170 a2=3 a3=0 items=0 ppid=1 pid=5047 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:55.085000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:56:55.088762 sshd-session[5047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:56:55.101311 systemd-logind[1584]: New session 18 of user core. Dec 16 12:56:55.110074 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 16 12:56:55.115000 audit[5047]: USER_START pid=5047 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:55.119000 audit[5053]: CRED_ACQ pid=5053 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:55.423237 containerd[1617]: time="2025-12-16T12:56:55.422786314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:56:55.823009 containerd[1617]: time="2025-12-16T12:56:55.822830830Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:56:55.823968 containerd[1617]: time="2025-12-16T12:56:55.823828237Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:56:55.823968 containerd[1617]: time="2025-12-16T12:56:55.823935206Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:56:55.824758 kubelet[2789]: E1216 12:56:55.824703 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:56:55.825350 kubelet[2789]: E1216 12:56:55.824779 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:56:55.825350 kubelet[2789]: E1216 12:56:55.824998 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-plx6k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f6f6f8cd5-qx8fz_calico-apiserver(5a9eaa6d-ffc5-496a-b44d-d7e196b6b18c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:56:55.826877 kubelet[2789]: E1216 12:56:55.826385 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f6f6f8cd5-qx8fz" podUID="5a9eaa6d-ffc5-496a-b44d-d7e196b6b18c" Dec 16 12:56:55.885667 sshd[5053]: Connection closed by 147.75.109.163 port 44734 Dec 16 12:56:55.885580 sshd-session[5047]: pam_unix(sshd:session): session closed for user core Dec 16 12:56:55.896000 audit[5047]: USER_END pid=5047 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:55.896000 audit[5047]: CRED_DISP pid=5047 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:55.906333 systemd[1]: sshd@17-164.90.155.252:22-147.75.109.163:44734.service: Deactivated successfully. Dec 16 12:56:55.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-164.90.155.252:22-147.75.109.163:44734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:55.910910 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 12:56:55.915967 systemd-logind[1584]: Session 18 logged out. Waiting for processes to exit. Dec 16 12:56:55.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-164.90.155.252:22-147.75.109.163:44738 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:55.919612 systemd[1]: Started sshd@18-164.90.155.252:22-147.75.109.163:44738.service - OpenSSH per-connection server daemon (147.75.109.163:44738). Dec 16 12:56:55.925443 systemd-logind[1584]: Removed session 18. 
Dec 16 12:56:55.949000 audit[5064]: NETFILTER_CFG table=filter:144 family=2 entries=26 op=nft_register_rule pid=5064 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:56:55.949000 audit[5064]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff222e2fe0 a2=0 a3=7fff222e2fcc items=0 ppid=2936 pid=5064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:55.949000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:56:55.964000 audit[5064]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=5064 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:56:55.964000 audit[5064]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff222e2fe0 a2=0 a3=0 items=0 ppid=2936 pid=5064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:55.964000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:56:56.030000 audit[5073]: NETFILTER_CFG table=filter:146 family=2 entries=38 op=nft_register_rule pid=5073 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:56:56.030000 audit[5073]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fffb2c5ca50 a2=0 a3=7fffb2c5ca3c items=0 ppid=2936 pid=5073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:56.030000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:56:56.034321 sshd[5069]: Accepted publickey for core from 147.75.109.163 port 44738 ssh2: RSA SHA256:LU1qZA7a/A5pU4aT9e5vsRy+gIvYDHbmZDPRbtPLDD8 Dec 16 12:56:56.032000 audit[5069]: USER_ACCT pid=5069 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:56.034000 audit[5069]: CRED_ACQ pid=5069 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:56.034000 audit[5069]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb15ed720 a2=3 a3=0 items=0 ppid=1 pid=5069 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:56.034000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:56:56.036942 sshd-session[5069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:56:56.037000 audit[5073]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=5073 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:56:56.037000 audit[5073]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffb2c5ca50 a2=0 a3=0 items=0 ppid=2936 pid=5073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:56.037000 audit: 
PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:56:56.046664 systemd-logind[1584]: New session 19 of user core. Dec 16 12:56:56.057004 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 16 12:56:56.061000 audit[5069]: USER_START pid=5069 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:56.064000 audit[5074]: CRED_ACQ pid=5074 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:56.751751 sshd[5074]: Connection closed by 147.75.109.163 port 44738 Dec 16 12:56:56.752543 sshd-session[5069]: pam_unix(sshd:session): session closed for user core Dec 16 12:56:56.753000 audit[5069]: USER_END pid=5069 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:56.753000 audit[5069]: CRED_DISP pid=5069 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:56.769797 systemd[1]: sshd@18-164.90.155.252:22-147.75.109.163:44738.service: Deactivated successfully. 
Dec 16 12:56:56.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-164.90.155.252:22-147.75.109.163:44738 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:56.775238 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 12:56:56.781470 systemd-logind[1584]: Session 19 logged out. Waiting for processes to exit. Dec 16 12:56:56.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-164.90.155.252:22-147.75.109.163:44750 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:56.789872 systemd[1]: Started sshd@19-164.90.155.252:22-147.75.109.163:44750.service - OpenSSH per-connection server daemon (147.75.109.163:44750). Dec 16 12:56:56.802569 systemd-logind[1584]: Removed session 19. Dec 16 12:56:56.911176 sshd[5084]: Accepted publickey for core from 147.75.109.163 port 44750 ssh2: RSA SHA256:LU1qZA7a/A5pU4aT9e5vsRy+gIvYDHbmZDPRbtPLDD8 Dec 16 12:56:56.909000 audit[5084]: USER_ACCT pid=5084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:56.911000 audit[5084]: CRED_ACQ pid=5084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:56.911000 audit[5084]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffbd4cbb50 a2=3 a3=0 items=0 ppid=1 pid=5084 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:56:56.911000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:56:56.913607 sshd-session[5084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:56:56.927692 systemd-logind[1584]: New session 20 of user core. Dec 16 12:56:56.931165 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 16 12:56:56.936000 audit[5084]: USER_START pid=5084 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:56.940000 audit[5087]: CRED_ACQ pid=5087 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:57.095029 sshd[5087]: Connection closed by 147.75.109.163 port 44750 Dec 16 12:56:57.095622 sshd-session[5084]: pam_unix(sshd:session): session closed for user core Dec 16 12:56:57.098000 audit[5084]: USER_END pid=5084 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:57.099000 audit[5084]: CRED_DISP pid=5084 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:56:57.104583 systemd[1]: sshd@19-164.90.155.252:22-147.75.109.163:44750.service: Deactivated 
successfully. Dec 16 12:56:57.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-164.90.155.252:22-147.75.109.163:44750 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:56:57.109180 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 12:56:57.110974 systemd-logind[1584]: Session 20 logged out. Waiting for processes to exit. Dec 16 12:56:57.113306 systemd-logind[1584]: Removed session 20. Dec 16 12:57:00.428347 kubelet[2789]: E1216 12:57:00.428240 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-675645944b-fkqz9" podUID="dd978ccd-4987-4640-957a-11962c9801ea" Dec 16 12:57:01.418378 kubelet[2789]: E1216 12:57:01.418324 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:57:01.421574 kubelet[2789]: E1216 12:57:01.421469 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hvm4l" podUID="9d74f6fb-d2c4-41ff-9241-88dfaec31538" Dec 16 12:57:02.113282 systemd[1]: Started sshd@20-164.90.155.252:22-147.75.109.163:44754.service - OpenSSH per-connection server daemon (147.75.109.163:44754). Dec 16 12:57:02.130893 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 16 12:57:02.130993 kernel: audit: type=1130 audit(1765889822.111:871): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-164.90.155.252:22-147.75.109.163:44754 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:02.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-164.90.155.252:22-147.75.109.163:44754 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:57:02.302358 kernel: audit: type=1101 audit(1765889822.295:872): pid=5101 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:57:02.295000 audit[5101]: USER_ACCT pid=5101 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:57:02.302620 sshd[5101]: Accepted publickey for core from 147.75.109.163 port 44754 ssh2: RSA SHA256:LU1qZA7a/A5pU4aT9e5vsRy+gIvYDHbmZDPRbtPLDD8 Dec 16 12:57:02.303489 sshd-session[5101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:57:02.295000 audit[5101]: CRED_ACQ pid=5101 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:57:02.311839 kernel: audit: type=1103 audit(1765889822.295:873): pid=5101 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:57:02.314578 systemd-logind[1584]: New session 21 of user core. 
Dec 16 12:57:02.318680 kernel: audit: type=1006 audit(1765889822.295:874): pid=5101 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Dec 16 12:57:02.295000 audit[5101]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffffc39ed90 a2=3 a3=0 items=0 ppid=1 pid=5101 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:02.327381 kernel: audit: type=1300 audit(1765889822.295:874): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffffc39ed90 a2=3 a3=0 items=0 ppid=1 pid=5101 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:02.321213 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 16 12:57:02.295000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:57:02.332668 kernel: audit: type=1327 audit(1765889822.295:874): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:57:02.326000 audit[5101]: USER_START pid=5101 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:57:02.339020 kernel: audit: type=1105 audit(1765889822.326:875): pid=5101 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:57:02.330000 audit[5105]: CRED_ACQ pid=5105 
uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:57:02.349944 kernel: audit: type=1103 audit(1765889822.330:876): pid=5105 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:57:02.381000 audit[5109]: NETFILTER_CFG table=filter:148 family=2 entries=26 op=nft_register_rule pid=5109 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:57:02.392412 kernel: audit: type=1325 audit(1765889822.381:877): table=filter:148 family=2 entries=26 op=nft_register_rule pid=5109 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:57:02.392582 kernel: audit: type=1300 audit(1765889822.381:877): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe767d0a00 a2=0 a3=7ffe767d09ec items=0 ppid=2936 pid=5109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:02.381000 audit[5109]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe767d0a00 a2=0 a3=7ffe767d09ec items=0 ppid=2936 pid=5109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:02.381000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:57:02.397000 audit[5109]: NETFILTER_CFG table=nat:149 family=2 entries=104 op=nft_register_chain pid=5109 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Dec 16 12:57:02.397000 audit[5109]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffe767d0a00 a2=0 a3=7ffe767d09ec items=0 ppid=2936 pid=5109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:02.397000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:57:02.584740 sshd[5105]: Connection closed by 147.75.109.163 port 44754 Dec 16 12:57:02.585856 sshd-session[5101]: pam_unix(sshd:session): session closed for user core Dec 16 12:57:02.590000 audit[5101]: USER_END pid=5101 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:57:02.591000 audit[5101]: CRED_DISP pid=5101 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:57:02.597180 systemd[1]: sshd@20-164.90.155.252:22-147.75.109.163:44754.service: Deactivated successfully. Dec 16 12:57:02.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-164.90.155.252:22-147.75.109.163:44754 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:02.602572 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 12:57:02.605867 systemd-logind[1584]: Session 21 logged out. Waiting for processes to exit. Dec 16 12:57:02.610257 systemd-logind[1584]: Removed session 21. 
Dec 16 12:57:04.423369 kubelet[2789]: E1216 12:57:04.423212 2789 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 12:57:04.425673 kubelet[2789]: E1216 12:57:04.425587 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rmjtf" podUID="7b89c039-0754-43bd-ad85-5506dee48dad" Dec 16 12:57:05.419929 kubelet[2789]: E1216 12:57:05.419859 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f6f6f8cd5-nxqwt" podUID="4d5fd089-d56c-460c-b006-cc36a126ec32" Dec 16 12:57:07.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-164.90.155.252:22-147.75.109.163:38244 comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:07.611146 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 12:57:07.611210 kernel: audit: type=1130 audit(1765889827.605:882): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-164.90.155.252:22-147.75.109.163:38244 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:07.607317 systemd[1]: Started sshd@21-164.90.155.252:22-147.75.109.163:38244.service - OpenSSH per-connection server daemon (147.75.109.163:38244). Dec 16 12:57:07.717008 sshd[5118]: Accepted publickey for core from 147.75.109.163 port 38244 ssh2: RSA SHA256:LU1qZA7a/A5pU4aT9e5vsRy+gIvYDHbmZDPRbtPLDD8 Dec 16 12:57:07.715000 audit[5118]: USER_ACCT pid=5118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:57:07.721676 kernel: audit: type=1101 audit(1765889827.715:883): pid=5118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:57:07.723207 sshd-session[5118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:57:07.721000 audit[5118]: CRED_ACQ pid=5118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:57:07.729676 kernel: audit: type=1103 audit(1765889827.721:884): pid=5118 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:57:07.734670 kernel: audit: type=1006 audit(1765889827.721:885): pid=5118 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Dec 16 12:57:07.721000 audit[5118]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc4f90f210 a2=3 a3=0 items=0 ppid=1 pid=5118 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:07.742470 kernel: audit: type=1300 audit(1765889827.721:885): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc4f90f210 a2=3 a3=0 items=0 ppid=1 pid=5118 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:07.742584 kernel: audit: type=1327 audit(1765889827.721:885): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:57:07.721000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:57:07.737926 systemd-logind[1584]: New session 22 of user core. Dec 16 12:57:07.746884 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 16 12:57:07.749000 audit[5118]: USER_START pid=5118 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:57:07.756669 kernel: audit: type=1105 audit(1765889827.749:886): pid=5118 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:57:07.759000 audit[5121]: CRED_ACQ pid=5121 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:57:07.765669 kernel: audit: type=1103 audit(1765889827.759:887): pid=5121 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:57:07.949955 sshd[5121]: Connection closed by 147.75.109.163 port 38244 Dec 16 12:57:07.949188 sshd-session[5118]: pam_unix(sshd:session): session closed for user core Dec 16 12:57:07.952000 audit[5118]: USER_END pid=5118 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:57:07.958669 kernel: audit: type=1106 audit(1765889827.952:888): pid=5118 uid=0 auid=500 ses=22 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:57:07.962460 systemd[1]: sshd@21-164.90.155.252:22-147.75.109.163:38244.service: Deactivated successfully. Dec 16 12:57:07.957000 audit[5118]: CRED_DISP pid=5118 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:57:07.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-164.90.155.252:22-147.75.109.163:38244 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:07.967718 kernel: audit: type=1104 audit(1765889827.957:889): pid=5118 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:57:07.970088 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 12:57:07.973918 systemd-logind[1584]: Session 22 logged out. Waiting for processes to exit. Dec 16 12:57:07.976175 systemd-logind[1584]: Removed session 22. 
Dec 16 12:57:10.420784 kubelet[2789]: E1216 12:57:10.420727 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f6f6f8cd5-qx8fz" podUID="5a9eaa6d-ffc5-496a-b44d-d7e196b6b18c" Dec 16 12:57:10.421396 kubelet[2789]: E1216 12:57:10.421193 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b46df5bf6-8vt25" podUID="eceb5ead-85dc-4ae8-98b5-b55994dab5ce" Dec 16 12:57:12.963706 systemd[1]: Started sshd@22-164.90.155.252:22-147.75.109.163:36828.service - OpenSSH per-connection server daemon (147.75.109.163:36828). Dec 16 12:57:12.967472 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:57:12.967929 kernel: audit: type=1130 audit(1765889832.963:891): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-164.90.155.252:22-147.75.109.163:36828 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:12.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-164.90.155.252:22-147.75.109.163:36828 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:57:13.049000 audit[5160]: USER_ACCT pid=5160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:57:13.050834 sshd[5160]: Accepted publickey for core from 147.75.109.163 port 36828 ssh2: RSA SHA256:LU1qZA7a/A5pU4aT9e5vsRy+gIvYDHbmZDPRbtPLDD8 Dec 16 12:57:13.052672 sshd-session[5160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:57:13.054678 kernel: audit: type=1101 audit(1765889833.049:892): pid=5160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:57:13.051000 audit[5160]: CRED_ACQ pid=5160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:57:13.063672 kernel: audit: type=1103 audit(1765889833.051:893): pid=5160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:57:13.066727 systemd-logind[1584]: New session 23 of user core. 
Dec 16 12:57:13.071654 kernel: audit: type=1006 audit(1765889833.051:894): pid=5160 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Dec 16 12:57:13.051000 audit[5160]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff09de9770 a2=3 a3=0 items=0 ppid=1 pid=5160 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:13.051000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:57:13.076950 kernel: audit: type=1300 audit(1765889833.051:894): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff09de9770 a2=3 a3=0 items=0 ppid=1 pid=5160 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:13.077080 kernel: audit: type=1327 audit(1765889833.051:894): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:57:13.079897 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 16 12:57:13.092000 audit[5160]: USER_START pid=5160 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:57:13.101074 kernel: audit: type=1105 audit(1765889833.092:895): pid=5160 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:57:13.099000 audit[5163]: CRED_ACQ pid=5163 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:57:13.107900 kernel: audit: type=1103 audit(1765889833.099:896): pid=5163 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:57:13.250670 sshd[5163]: Connection closed by 147.75.109.163 port 36828 Dec 16 12:57:13.251720 sshd-session[5160]: pam_unix(sshd:session): session closed for user core Dec 16 12:57:13.254000 audit[5160]: USER_END pid=5160 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:57:13.259661 kernel: audit: type=1106 audit(1765889833.254:897): pid=5160 uid=0 auid=500 ses=23 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:57:13.259613 systemd[1]: sshd@22-164.90.155.252:22-147.75.109.163:36828.service: Deactivated successfully. Dec 16 12:57:13.263718 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 12:57:13.265309 systemd-logind[1584]: Session 23 logged out. Waiting for processes to exit. Dec 16 12:57:13.254000 audit[5160]: CRED_DISP pid=5160 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:57:13.271687 kernel: audit: type=1104 audit(1765889833.254:898): pid=5160 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:57:13.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-164.90.155.252:22-147.75.109.163:36828 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:13.272672 systemd-logind[1584]: Removed session 23. 
Dec 16 12:57:15.420391 kubelet[2789]: E1216 12:57:15.419653 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hvm4l" podUID="9d74f6fb-d2c4-41ff-9241-88dfaec31538" Dec 16 12:57:15.421417 kubelet[2789]: E1216 12:57:15.421364 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-675645944b-fkqz9" podUID="dd978ccd-4987-4640-957a-11962c9801ea" Dec 16 12:57:16.427155 kubelet[2789]: E1216 12:57:16.424312 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for 
\"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rmjtf" podUID="7b89c039-0754-43bd-ad85-5506dee48dad"