Dec 12 18:18:27.258444 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:17:57 -00 2025 Dec 12 18:18:27.258503 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee Dec 12 18:18:27.258530 kernel: BIOS-provided physical RAM map: Dec 12 18:18:27.258545 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 12 18:18:27.258566 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 12 18:18:27.258578 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 12 18:18:27.258594 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Dec 12 18:18:27.258613 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Dec 12 18:18:27.258625 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 12 18:18:27.258637 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 12 18:18:27.258662 kernel: NX (Execute Disable) protection: active Dec 12 18:18:27.258675 kernel: APIC: Static calls initialized Dec 12 18:18:27.258689 kernel: SMBIOS 2.8 present. Dec 12 18:18:27.258704 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Dec 12 18:18:27.258721 kernel: DMI: Memory slots populated: 1/1 Dec 12 18:18:27.258744 kernel: Hypervisor detected: KVM Dec 12 18:18:27.258762 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Dec 12 18:18:27.258776 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 12 18:18:27.258791 kernel: kvm-clock: using sched offset of 4444383504 cycles Dec 12 18:18:27.258806 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 12 18:18:27.258821 kernel: tsc: Detected 2294.608 MHz processor Dec 12 18:18:27.258837 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 12 18:18:27.258853 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 12 18:18:27.258877 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Dec 12 18:18:27.258892 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Dec 12 18:18:27.258907 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 12 18:18:27.258922 kernel: ACPI: Early table checksum verification disabled Dec 12 18:18:27.258938 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Dec 12 18:18:27.258953 kernel: ACPI: RSDT 0x000000007FFE19FD 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 18:18:27.258967 kernel: ACPI: FACP 0x000000007FFE17E1 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 18:18:27.258993 kernel: ACPI: DSDT 0x000000007FFE0040 0017A1 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 18:18:27.259013 kernel: ACPI: FACS 0x000000007FFE0000 000040 Dec 12 18:18:27.259032 kernel: ACPI: APIC 0x000000007FFE1855 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 18:18:27.259047 kernel: ACPI: HPET 0x000000007FFE18D5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 18:18:27.259061 kernel: ACPI: SRAT 0x000000007FFE190D 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 
18:18:27.259075 kernel: ACPI: WAET 0x000000007FFE19D5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 18:18:27.259091 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe17e1-0x7ffe1854] Dec 12 18:18:27.259118 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe17e0] Dec 12 18:18:27.259132 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Dec 12 18:18:27.259147 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe1855-0x7ffe18d4] Dec 12 18:18:27.259174 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe18d5-0x7ffe190c] Dec 12 18:18:27.259189 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe190d-0x7ffe19d4] Dec 12 18:18:27.259203 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe19d5-0x7ffe19fc] Dec 12 18:18:27.259228 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Dec 12 18:18:27.259279 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Dec 12 18:18:27.259296 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff] Dec 12 18:18:27.259313 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff] Dec 12 18:18:27.259331 kernel: Zone ranges: Dec 12 18:18:27.259347 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 12 18:18:27.259374 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Dec 12 18:18:27.259392 kernel: Normal empty Dec 12 18:18:27.259407 kernel: Device empty Dec 12 18:18:27.259423 kernel: Movable zone start for each node Dec 12 18:18:27.259438 kernel: Early memory node ranges Dec 12 18:18:27.259453 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 12 18:18:27.259467 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Dec 12 18:18:27.259492 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Dec 12 18:18:27.259509 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 12 18:18:27.259525 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 12 18:18:27.259541 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Dec 12 18:18:27.259558 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 12 18:18:27.259580 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 12 18:18:27.259597 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 12 18:18:27.259617 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 12 18:18:27.259643 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 12 18:18:27.259659 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 12 18:18:27.259680 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 12 18:18:27.259697 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 12 18:18:27.259712 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 12 18:18:27.259728 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 12 18:18:27.259745 kernel: TSC deadline timer available Dec 12 18:18:27.259771 kernel: CPU topo: Max. logical packages: 1 Dec 12 18:18:27.259788 kernel: CPU topo: Max. logical dies: 1 Dec 12 18:18:27.259805 kernel: CPU topo: Max. dies per package: 1 Dec 12 18:18:27.259822 kernel: CPU topo: Max. threads per core: 1 Dec 12 18:18:27.259839 kernel: CPU topo: Num. cores per package: 2 Dec 12 18:18:27.259856 kernel: CPU topo: Num. 
threads per package: 2 Dec 12 18:18:27.259873 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Dec 12 18:18:27.259898 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 12 18:18:27.259914 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Dec 12 18:18:27.259929 kernel: Booting paravirtualized kernel on KVM Dec 12 18:18:27.259944 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 12 18:18:27.259961 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Dec 12 18:18:27.259978 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Dec 12 18:18:27.259995 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Dec 12 18:18:27.260021 kernel: pcpu-alloc: [0] 0 1 Dec 12 18:18:27.260038 kernel: kvm-guest: PV spinlocks disabled, no host support Dec 12 18:18:27.260057 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee Dec 12 18:18:27.260075 kernel: random: crng init done Dec 12 18:18:27.260091 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 12 18:18:27.260109 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 12 18:18:27.260125 kernel: Fallback order for Node 0: 0 Dec 12 18:18:27.260149 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153 Dec 12 18:18:27.260164 kernel: Policy zone: DMA32 Dec 12 18:18:27.260180 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 12 18:18:27.260197 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 12 18:18:27.260214 kernel: Kernel/User page tables isolation: enabled Dec 12 18:18:27.260231 kernel: ftrace: allocating 40103 entries in 157 pages Dec 12 18:18:27.262325 kernel: ftrace: allocated 157 pages with 5 groups Dec 12 18:18:27.262377 kernel: Dynamic Preempt: voluntary Dec 12 18:18:27.262394 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 12 18:18:27.262414 kernel: rcu: RCU event tracing is enabled. Dec 12 18:18:27.262431 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 12 18:18:27.262446 kernel: Trampoline variant of Tasks RCU enabled. Dec 12 18:18:27.262462 kernel: Rude variant of Tasks RCU enabled. Dec 12 18:18:27.262478 kernel: Tracing variant of Tasks RCU enabled. Dec 12 18:18:27.262495 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 12 18:18:27.262522 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 12 18:18:27.262539 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 12 18:18:27.262560 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 12 18:18:27.262577 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 12 18:18:27.262594 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 12 18:18:27.262611 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
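The "Kernel command line:" entry above is what the rest of the boot (dracut, Ignition, the verity-backed /usr mount) keys off. As a reading aid, here is a minimal Python sketch that splits such a line into bare flags and key=value options; it is illustrative only and does not reproduce the kernel's own parsing (quoting, module prefixes and duplicate handling differ there).

```python
# Minimal sketch: split a kernel command line (like the "Kernel command line:"
# entry logged above) into flags and key=value options. Illustrative only.
def parse_cmdline(cmdline: str):
    flags, options = set(), {}
    for token in cmdline.split():
        if "=" in token:
            key, value = token.split("=", 1)
            # Later duplicates (e.g. the repeated rootflags=rw) win here.
            options[key] = value
        else:
            flags.add(token)
    return flags, options

if __name__ == "__main__":
    with open("/proc/cmdline") as f:   # same content the kernel logs at boot
        flags, options = parse_cmdline(f.read())
    print(sorted(flags))
    print(options.get("verity.usrhash"), options.get("flatcar.first_boot"))
```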
Dec 12 18:18:27.262629 kernel: Console: colour VGA+ 80x25 Dec 12 18:18:27.262654 kernel: printk: legacy console [tty0] enabled Dec 12 18:18:27.262671 kernel: printk: legacy console [ttyS0] enabled Dec 12 18:18:27.262689 kernel: ACPI: Core revision 20240827 Dec 12 18:18:27.262706 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Dec 12 18:18:27.262748 kernel: APIC: Switch to symmetric I/O mode setup Dec 12 18:18:27.262773 kernel: x2apic enabled Dec 12 18:18:27.262790 kernel: APIC: Switched APIC routing to: physical x2apic Dec 12 18:18:27.262808 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 12 18:18:27.262827 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns Dec 12 18:18:27.262856 kernel: Calibrating delay loop (skipped) preset value.. 4589.21 BogoMIPS (lpj=2294608) Dec 12 18:18:27.262874 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Dec 12 18:18:27.262892 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Dec 12 18:18:27.262911 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 12 18:18:27.262937 kernel: Spectre V2 : Mitigation: Retpolines Dec 12 18:18:27.262955 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Dec 12 18:18:27.262973 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Dec 12 18:18:27.262991 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 12 18:18:27.263009 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 12 18:18:27.263027 kernel: MDS: Mitigation: Clear CPU buffers Dec 12 18:18:27.263045 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 12 18:18:27.263071 kernel: active return thunk: its_return_thunk Dec 12 18:18:27.263088 kernel: ITS: Mitigation: Aligned branch/return thunks Dec 12 18:18:27.263106 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 12 18:18:27.263124 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 12 18:18:27.263142 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 12 18:18:27.263160 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 12 18:18:27.263178 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 12 18:18:27.263204 kernel: Freeing SMP alternatives memory: 32K Dec 12 18:18:27.263222 kernel: pid_max: default: 32768 minimum: 301 Dec 12 18:18:27.263259 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 12 18:18:27.263277 kernel: landlock: Up and running. Dec 12 18:18:27.263295 kernel: SELinux: Initializing. Dec 12 18:18:27.263313 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 12 18:18:27.263331 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 12 18:18:27.263356 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Dec 12 18:18:27.263375 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Dec 12 18:18:27.263393 kernel: signal: max sigframe size: 1776 Dec 12 18:18:27.263411 kernel: rcu: Hierarchical SRCU implementation. Dec 12 18:18:27.263429 kernel: rcu: Max phase no-delay instances is 400. 
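The x86/fpu lines above can be cross-checked by hand: features 0x001 (x87), 0x002 (SSE) and 0x004 (AVX) OR together to the logged mask 0x7, and in the standard XSAVE layout the AVX area follows the 512-byte legacy region plus the 64-byte XSAVE header, which is where the logged offset 576 and the 832-byte context size come from. A small worked check, assuming only that standard layout:

```python
# Worked check of the x86/fpu lines above.
X87, SSE, AVX = 0x001, 0x002, 0x004
assert X87 | SSE | AVX == 0x7                 # "Enabled xstate features 0x7"

LEGACY_AREA = 512     # x87 + SSE state (FXSAVE region)
XSAVE_HEADER = 64
AVX_STATE = 256       # xstate_sizes[2] from the log

offset_avx = LEGACY_AREA + XSAVE_HEADER
assert offset_avx == 576                      # xstate_offset[2]
assert offset_avx + AVX_STATE == 832          # "context size is 832 bytes"
```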
Dec 12 18:18:27.263447 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 12 18:18:27.263466 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 12 18:18:27.263492 kernel: smp: Bringing up secondary CPUs ... Dec 12 18:18:27.263512 kernel: smpboot: x86: Booting SMP configuration: Dec 12 18:18:27.263530 kernel: .... node #0, CPUs: #1 Dec 12 18:18:27.263548 kernel: smp: Brought up 1 node, 2 CPUs Dec 12 18:18:27.263566 kernel: smpboot: Total of 2 processors activated (9178.43 BogoMIPS) Dec 12 18:18:27.263585 kernel: Memory: 1985336K/2096612K available (14336K kernel code, 2444K rwdata, 29892K rodata, 15464K init, 2576K bss, 106712K reserved, 0K cma-reserved) Dec 12 18:18:27.263603 kernel: devtmpfs: initialized Dec 12 18:18:27.263629 kernel: x86/mm: Memory block size: 128MB Dec 12 18:18:27.263647 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 12 18:18:27.263664 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 12 18:18:27.263682 kernel: pinctrl core: initialized pinctrl subsystem Dec 12 18:18:27.263700 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 12 18:18:27.263718 kernel: audit: initializing netlink subsys (disabled) Dec 12 18:18:27.263736 kernel: audit: type=2000 audit(1765563503.507:1): state=initialized audit_enabled=0 res=1 Dec 12 18:18:27.263762 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 12 18:18:27.263780 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 12 18:18:27.263797 kernel: cpuidle: using governor menu Dec 12 18:18:27.263815 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 12 18:18:27.263833 kernel: dca service started, version 1.12.1 Dec 12 18:18:27.263851 kernel: PCI: Using configuration type 1 for base access Dec 12 18:18:27.263870 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
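The BogoMIPS figures above follow directly from lpj=2294608, which the kernel took from the 2294.608 MHz TSC instead of calibrating ("Calibrating delay loop (skipped)"). Assuming CONFIG_HZ=1000, the printed per-CPU value of 4589.21 and the two-CPU total of 9178.43 reproduce exactly; a short sketch of the arithmetic:

```python
# Reproducing the BogoMIPS figures logged above from lpj=2294608.
# Assumes CONFIG_HZ=1000; the kernel prints lpj/(500000/HZ) with two
# truncated decimal digits.
lpj = 2294608
HZ = 1000   # assumption; matches the printed values below

def bogomips(loops_per_jiffy: int) -> str:
    whole = loops_per_jiffy // (500000 // HZ)
    frac = (loops_per_jiffy // (5000 // HZ)) % 100
    return f"{whole}.{frac:02d}"

print(bogomips(lpj))       # 4589.21  ("Calibrating delay loop (skipped) ...")
print(bogomips(2 * lpj))   # 9178.43  ("Total of 2 processors activated ...")
```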
Dec 12 18:18:27.263888 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 12 18:18:27.263913 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 12 18:18:27.263931 kernel: ACPI: Added _OSI(Module Device) Dec 12 18:18:27.263949 kernel: ACPI: Added _OSI(Processor Device) Dec 12 18:18:27.263967 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 12 18:18:27.263985 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 12 18:18:27.264003 kernel: ACPI: Interpreter enabled Dec 12 18:18:27.264020 kernel: ACPI: PM: (supports S0 S5) Dec 12 18:18:27.264046 kernel: ACPI: Using IOAPIC for interrupt routing Dec 12 18:18:27.264064 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 12 18:18:27.264082 kernel: PCI: Using E820 reservations for host bridge windows Dec 12 18:18:27.264100 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Dec 12 18:18:27.264118 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 12 18:18:27.266570 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 12 18:18:27.266829 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Dec 12 18:18:27.267046 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Dec 12 18:18:27.267070 kernel: acpiphp: Slot [3] registered Dec 12 18:18:27.267089 kernel: acpiphp: Slot [4] registered Dec 12 18:18:27.267108 kernel: acpiphp: Slot [5] registered Dec 12 18:18:27.267126 kernel: acpiphp: Slot [6] registered Dec 12 18:18:27.267144 kernel: acpiphp: Slot [7] registered Dec 12 18:18:27.267174 kernel: acpiphp: Slot [8] registered Dec 12 18:18:27.267192 kernel: acpiphp: Slot [9] registered Dec 12 18:18:27.267210 kernel: acpiphp: Slot [10] registered Dec 12 18:18:27.267228 kernel: acpiphp: Slot [11] registered Dec 12 18:18:27.267266 kernel: acpiphp: Slot [12] registered Dec 12 18:18:27.267284 kernel: acpiphp: Slot [13] registered Dec 12 18:18:27.267302 kernel: acpiphp: Slot [14] registered Dec 12 18:18:27.267329 kernel: acpiphp: Slot [15] registered Dec 12 18:18:27.267347 kernel: acpiphp: Slot [16] registered Dec 12 18:18:27.267365 kernel: acpiphp: Slot [17] registered Dec 12 18:18:27.267383 kernel: acpiphp: Slot [18] registered Dec 12 18:18:27.267401 kernel: acpiphp: Slot [19] registered Dec 12 18:18:27.267419 kernel: acpiphp: Slot [20] registered Dec 12 18:18:27.267438 kernel: acpiphp: Slot [21] registered Dec 12 18:18:27.267463 kernel: acpiphp: Slot [22] registered Dec 12 18:18:27.267481 kernel: acpiphp: Slot [23] registered Dec 12 18:18:27.267499 kernel: acpiphp: Slot [24] registered Dec 12 18:18:27.267517 kernel: acpiphp: Slot [25] registered Dec 12 18:18:27.267535 kernel: acpiphp: Slot [26] registered Dec 12 18:18:27.267552 kernel: acpiphp: Slot [27] registered Dec 12 18:18:27.267570 kernel: acpiphp: Slot [28] registered Dec 12 18:18:27.267588 kernel: acpiphp: Slot [29] registered Dec 12 18:18:27.267613 kernel: acpiphp: Slot [30] registered Dec 12 18:18:27.267631 kernel: acpiphp: Slot [31] registered Dec 12 18:18:27.267649 kernel: PCI host bridge to bus 0000:00 Dec 12 18:18:27.267885 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 12 18:18:27.268092 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 12 18:18:27.270421 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 12 18:18:27.270695 kernel: pci_bus 0000:00: 
root bus resource [mem 0x80000000-0xfebfffff window] Dec 12 18:18:27.270906 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Dec 12 18:18:27.271105 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 12 18:18:27.271390 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Dec 12 18:18:27.271633 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Dec 12 18:18:27.271885 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint Dec 12 18:18:27.272145 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef] Dec 12 18:18:27.274494 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk Dec 12 18:18:27.274757 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk Dec 12 18:18:27.274998 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk Dec 12 18:18:27.275231 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk Dec 12 18:18:27.275528 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint Dec 12 18:18:27.275794 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f] Dec 12 18:18:27.276025 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Dec 12 18:18:27.278864 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Dec 12 18:18:27.279166 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Dec 12 18:18:27.279452 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint Dec 12 18:18:27.279669 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref] Dec 12 18:18:27.279883 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref] Dec 12 18:18:27.280095 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff] Dec 12 18:18:27.280325 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref] Dec 12 18:18:27.280540 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 12 18:18:27.280785 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Dec 12 18:18:27.281000 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf] Dec 12 18:18:27.281213 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff] Dec 12 18:18:27.283279 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref] Dec 12 18:18:27.283558 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Dec 12 18:18:27.283834 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df] Dec 12 18:18:27.284072 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff] Dec 12 18:18:27.284404 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref] Dec 12 18:18:27.284642 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint Dec 12 18:18:27.284864 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f] Dec 12 18:18:27.285081 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff] Dec 12 18:18:27.285391 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref] Dec 12 18:18:27.285636 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Dec 12 18:18:27.285861 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f] Dec 12 18:18:27.286110 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff] Dec 12 18:18:27.286356 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref] Dec 12 18:18:27.286595 kernel: pci 0000:00:07.0: 
[1af4:1001] type 00 class 0x010000 conventional PCI endpoint Dec 12 18:18:27.286853 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff] Dec 12 18:18:27.287077 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff] Dec 12 18:18:27.287328 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref] Dec 12 18:18:27.287562 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint Dec 12 18:18:27.287794 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f] Dec 12 18:18:27.288040 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref] Dec 12 18:18:27.288065 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 12 18:18:27.288084 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 12 18:18:27.288101 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 12 18:18:27.288117 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 12 18:18:27.288135 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Dec 12 18:18:27.288151 kernel: iommu: Default domain type: Translated Dec 12 18:18:27.288184 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 12 18:18:27.288201 kernel: PCI: Using ACPI for IRQ routing Dec 12 18:18:27.288219 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 12 18:18:27.288259 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 12 18:18:27.288276 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Dec 12 18:18:27.288525 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Dec 12 18:18:27.288753 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Dec 12 18:18:27.288993 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 12 18:18:27.289014 kernel: vgaarb: loaded Dec 12 18:18:27.289032 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Dec 12 18:18:27.289052 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Dec 12 18:18:27.289071 kernel: clocksource: Switched to clocksource kvm-clock Dec 12 18:18:27.289089 kernel: VFS: Disk quotas dquot_6.6.0 Dec 12 18:18:27.289105 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 12 18:18:27.289138 kernel: pnp: PnP ACPI init Dec 12 18:18:27.289156 kernel: pnp: PnP ACPI: found 4 devices Dec 12 18:18:27.289174 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 12 18:18:27.289191 kernel: NET: Registered PF_INET protocol family Dec 12 18:18:27.289209 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 12 18:18:27.289227 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 12 18:18:27.289263 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 12 18:18:27.289292 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 12 18:18:27.289310 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Dec 12 18:18:27.289328 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 12 18:18:27.289347 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 12 18:18:27.289365 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 12 18:18:27.289382 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 12 18:18:27.289401 kernel: NET: Registered PF_XDP protocol family Dec 12 18:18:27.289628 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 12 18:18:27.289825 kernel: pci_bus 
0000:00: resource 5 [io 0x0d00-0xffff window] Dec 12 18:18:27.290017 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 12 18:18:27.290233 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Dec 12 18:18:27.290463 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Dec 12 18:18:27.290699 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Dec 12 18:18:27.290950 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 12 18:18:27.290977 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Dec 12 18:18:27.291194 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 39944 usecs Dec 12 18:18:27.291218 kernel: PCI: CLS 0 bytes, default 64 Dec 12 18:18:27.291237 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 12 18:18:27.291274 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns Dec 12 18:18:27.291292 kernel: Initialise system trusted keyrings Dec 12 18:18:27.291326 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 12 18:18:27.291344 kernel: Key type asymmetric registered Dec 12 18:18:27.291361 kernel: Asymmetric key parser 'x509' registered Dec 12 18:18:27.291379 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 12 18:18:27.291397 kernel: io scheduler mq-deadline registered Dec 12 18:18:27.291415 kernel: io scheduler kyber registered Dec 12 18:18:27.291432 kernel: io scheduler bfq registered Dec 12 18:18:27.291459 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 12 18:18:27.291478 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Dec 12 18:18:27.291496 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Dec 12 18:18:27.291513 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Dec 12 18:18:27.291531 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 12 18:18:27.291548 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 12 18:18:27.291565 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 12 18:18:27.291591 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 12 18:18:27.291609 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 12 18:18:27.291627 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 12 18:18:27.291867 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 12 18:18:27.292070 kernel: rtc_cmos 00:03: registered as rtc0 Dec 12 18:18:27.292289 kernel: rtc_cmos 00:03: setting system clock to 2025-12-12T18:18:25 UTC (1765563505) Dec 12 18:18:27.292501 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Dec 12 18:18:27.292524 kernel: intel_pstate: CPU model not supported Dec 12 18:18:27.292542 kernel: NET: Registered PF_INET6 protocol family Dec 12 18:18:27.292561 kernel: Segment Routing with IPv6 Dec 12 18:18:27.292579 kernel: In-situ OAM (IOAM) with IPv6 Dec 12 18:18:27.292597 kernel: NET: Registered PF_PACKET protocol family Dec 12 18:18:27.292616 kernel: Key type dns_resolver registered Dec 12 18:18:27.292644 kernel: IPI shorthand broadcast: enabled Dec 12 18:18:27.292662 kernel: sched_clock: Marking stable (2348004617, 279942640)->(2702353331, -74406074) Dec 12 18:18:27.292680 kernel: registered taskstats version 1 Dec 12 18:18:27.292699 kernel: Loading compiled-in X.509 certificates Dec 12 18:18:27.292717 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: b90706f42f055ab9f35fc8fc29156d877adb12c4' Dec 
12 18:18:27.292735 kernel: Demotion targets for Node 0: null Dec 12 18:18:27.292753 kernel: Key type .fscrypt registered Dec 12 18:18:27.292770 kernel: Key type fscrypt-provisioning registered Dec 12 18:18:27.292838 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 12 18:18:27.292864 kernel: ima: Allocated hash algorithm: sha1 Dec 12 18:18:27.292883 kernel: ima: No architecture policies found Dec 12 18:18:27.292902 kernel: clk: Disabling unused clocks Dec 12 18:18:27.292921 kernel: Freeing unused kernel image (initmem) memory: 15464K Dec 12 18:18:27.292946 kernel: Write protecting the kernel read-only data: 45056k Dec 12 18:18:27.292965 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K Dec 12 18:18:27.292991 kernel: Run /init as init process Dec 12 18:18:27.293010 kernel: with arguments: Dec 12 18:18:27.293029 kernel: /init Dec 12 18:18:27.293047 kernel: with environment: Dec 12 18:18:27.293065 kernel: HOME=/ Dec 12 18:18:27.293083 kernel: TERM=linux Dec 12 18:18:27.293101 kernel: SCSI subsystem initialized Dec 12 18:18:27.293128 kernel: libata version 3.00 loaded. Dec 12 18:18:27.293371 kernel: ata_piix 0000:00:01.1: version 2.13 Dec 12 18:18:27.293616 kernel: scsi host0: ata_piix Dec 12 18:18:27.293838 kernel: scsi host1: ata_piix Dec 12 18:18:27.293864 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0 Dec 12 18:18:27.293883 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0 Dec 12 18:18:27.293916 kernel: ACPI: bus type USB registered Dec 12 18:18:27.293935 kernel: usbcore: registered new interface driver usbfs Dec 12 18:18:27.293953 kernel: usbcore: registered new interface driver hub Dec 12 18:18:27.293972 kernel: usbcore: registered new device driver usb Dec 12 18:18:27.294203 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Dec 12 18:18:27.294433 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Dec 12 18:18:27.294642 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Dec 12 18:18:27.294870 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Dec 12 18:18:27.295112 kernel: hub 1-0:1.0: USB hub found Dec 12 18:18:27.295356 kernel: hub 1-0:1.0: 2 ports detected Dec 12 18:18:27.295606 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Dec 12 18:18:27.295813 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Dec 12 18:18:27.295838 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 12 18:18:27.295857 kernel: GPT:16515071 != 125829119 Dec 12 18:18:27.295876 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 12 18:18:27.295901 kernel: GPT:16515071 != 125829119 Dec 12 18:18:27.295935 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 12 18:18:27.295955 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 12 18:18:27.296208 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Dec 12 18:18:27.296436 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Dec 12 18:18:27.296650 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues Dec 12 18:18:27.296871 kernel: scsi host2: Virtio SCSI HBA Dec 12 18:18:27.296911 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
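The virtio_blk and GPT messages above fit together arithmetically: 125829120 sectors of 512 bytes is the 64.4 GB / 60.0 GiB droplet disk, while the primary GPT header still records a backup header at LBA 16515071, i.e. the size the image appears to have been built with before the disk was grown, so the backup is no longer at the real last LBA 125829119. The log's own suggestion (GNU Parted), or sgdisk -e, would relocate it; the sketch below only checks the numbers.

```python
# Worked numbers for the virtio_blk / GPT messages above.
SECTOR = 512
actual_sectors = 125829120
recorded_alt_lba = 16515071                 # "GPT:16515071 != 125829119"

size_bytes = actual_sectors * SECTOR
print(size_bytes / 10**9)                   # ~64.4  -> "64.4 GB"
print(size_bytes / 2**30)                   # ~60.0  -> "60.0 GiB"

original_sectors = recorded_alt_lba + 1
print(original_sectors * SECTOR / 2**30)    # ~7.87 GiB: apparent size before the disk was grown
print(actual_sectors - 1)                   # 125829119, where the backup header should sit
```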
Dec 12 18:18:27.296931 kernel: device-mapper: uevent: version 1.0.3 Dec 12 18:18:27.296950 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 12 18:18:27.296969 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Dec 12 18:18:27.296988 kernel: raid6: avx2x4 gen() 16401 MB/s Dec 12 18:18:27.297007 kernel: raid6: avx2x2 gen() 16751 MB/s Dec 12 18:18:27.297033 kernel: raid6: avx2x1 gen() 13038 MB/s Dec 12 18:18:27.297053 kernel: raid6: using algorithm avx2x2 gen() 16751 MB/s Dec 12 18:18:27.297073 kernel: raid6: .... xor() 18910 MB/s, rmw enabled Dec 12 18:18:27.297092 kernel: raid6: using avx2x2 recovery algorithm Dec 12 18:18:27.297112 kernel: xor: automatically using best checksumming function avx Dec 12 18:18:27.297131 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 12 18:18:27.297150 kernel: BTRFS: device fsid ea73a94a-fb20-4d45-8448-4c6f4c422a4f devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (160) Dec 12 18:18:27.297169 kernel: BTRFS info (device dm-0): first mount of filesystem ea73a94a-fb20-4d45-8448-4c6f4c422a4f Dec 12 18:18:27.297196 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:18:27.297216 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 12 18:18:27.297235 kernel: BTRFS info (device dm-0): enabling free space tree Dec 12 18:18:27.297267 kernel: loop: module loaded Dec 12 18:18:27.297282 kernel: loop0: detected capacity change from 0 to 100136 Dec 12 18:18:27.297298 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 12 18:18:27.297327 systemd[1]: Successfully made /usr/ read-only. Dec 12 18:18:27.297353 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 18:18:27.297374 systemd[1]: Detected virtualization kvm. Dec 12 18:18:27.297392 systemd[1]: Detected architecture x86-64. Dec 12 18:18:27.297409 systemd[1]: Running in initrd. Dec 12 18:18:27.297468 systemd[1]: No hostname configured, using default hostname. Dec 12 18:18:27.297501 systemd[1]: Hostname set to . Dec 12 18:18:27.297519 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 12 18:18:27.297535 systemd[1]: Queued start job for default target initrd.target. Dec 12 18:18:27.297552 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 12 18:18:27.297570 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 18:18:27.297588 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 18:18:27.297609 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 12 18:18:27.297638 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 18:18:27.297658 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 12 18:18:27.297677 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 12 18:18:27.297697 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
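The systemd 257.9 banner above packs its compile-time options into +FEATURE / -FEATURE tokens. If you need them programmatically (for example to confirm +SELINUX or -APPARMOR on this build), a trivial sketch that splits the banner copied from the log:

```python
# Split the systemd feature string logged above into enabled/disabled sets.
banner = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT "
          "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
          "+IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 "
          "-PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD "
          "-BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE")

enabled = {t[1:] for t in banner.split() if t.startswith("+")}
disabled = {t[1:] for t in banner.split() if t.startswith("-")}
print("SELINUX" in enabled, "APPARMOR" in disabled)   # True True
```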
Dec 12 18:18:27.297717 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 18:18:27.297745 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 12 18:18:27.297762 systemd[1]: Reached target paths.target - Path Units. Dec 12 18:18:27.297779 systemd[1]: Reached target slices.target - Slice Units. Dec 12 18:18:27.297795 systemd[1]: Reached target swap.target - Swaps. Dec 12 18:18:27.297812 systemd[1]: Reached target timers.target - Timer Units. Dec 12 18:18:27.297829 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 18:18:27.297846 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 18:18:27.297878 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 12 18:18:27.297897 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 12 18:18:27.297915 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 12 18:18:27.297934 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 18:18:27.297954 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 18:18:27.297974 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 18:18:27.297992 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 18:18:27.298021 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 12 18:18:27.298058 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 12 18:18:27.298077 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 18:18:27.298097 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 12 18:18:27.298117 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 12 18:18:27.298136 systemd[1]: Starting systemd-fsck-usr.service... Dec 12 18:18:27.298154 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 18:18:27.298187 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 18:18:27.298206 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:18:27.298226 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 12 18:18:27.298283 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 18:18:27.298302 systemd[1]: Finished systemd-fsck-usr.service. Dec 12 18:18:27.298322 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 12 18:18:27.298406 systemd-journald[295]: Collecting audit messages is enabled. Dec 12 18:18:27.298461 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 12 18:18:27.298480 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 18:18:27.298499 kernel: Bridge firewalling registered Dec 12 18:18:27.298519 systemd-journald[295]: Journal started Dec 12 18:18:27.298555 systemd-journald[295]: Runtime Journal (/run/log/journal/5144897bcb514144bd7c4bb61e84b088) is 4.8M, max 39.1M, 34.2M free. 
Dec 12 18:18:27.281983 systemd-modules-load[298]: Inserted module 'br_netfilter' Dec 12 18:18:27.366967 kernel: audit: type=1130 audit(1765563507.356:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:27.367013 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 18:18:27.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:27.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:27.368469 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 18:18:27.382614 kernel: audit: type=1130 audit(1765563507.367:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:27.382658 kernel: audit: type=1130 audit(1765563507.375:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:27.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:27.375992 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:18:27.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:27.388762 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 12 18:18:27.392224 kernel: audit: type=1130 audit(1765563507.383:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:27.394511 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 18:18:27.398591 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 18:18:27.404201 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 18:18:27.425754 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 18:18:27.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:27.435277 kernel: audit: type=1130 audit(1765563507.427:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:27.435862 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
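The audit records interleaved above carry their own timestamps, e.g. audit(1765563507.356:2): the first number is a Unix epoch (seconds.milliseconds) and the second a serial. It should agree with the journal's "Dec 12 18:18:27" prefixes and with the rtc_cmos line earlier that set the clock to 1765563505 = 2025-12-12T18:18:25 UTC; a one-line check:

```python
# Convert the audit/rtc epochs seen above to UTC and compare with the
# journal's own "Dec 12 18:18:2x" prefixes.
from datetime import datetime, timezone

for epoch in (1765563505, 1765563507.356):
    print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
# 2025-12-12T18:18:25+00:00
# 2025-12-12T18:18:27.356000+00:00
```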
Dec 12 18:18:27.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:27.442439 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 18:18:27.447489 kernel: audit: type=1130 audit(1765563507.437:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:27.449342 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 18:18:27.449838 systemd-tmpfiles[319]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 12 18:18:27.465629 kernel: audit: type=1334 audit(1765563507.439:8): prog-id=6 op=LOAD Dec 12 18:18:27.465683 kernel: audit: type=1130 audit(1765563507.456:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:27.439000 audit: BPF prog-id=6 op=LOAD Dec 12 18:18:27.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:27.458939 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 18:18:27.476039 kernel: audit: type=1130 audit(1765563507.466:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:27.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:27.474519 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 12 18:18:27.510304 dracut-cmdline[339]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee Dec 12 18:18:27.553613 systemd-resolved[335]: Positive Trust Anchors: Dec 12 18:18:27.553634 systemd-resolved[335]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 18:18:27.553644 systemd-resolved[335]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 12 18:18:27.553747 systemd-resolved[335]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 18:18:27.600089 systemd-resolved[335]: Defaulting to hostname 'linux'. Dec 12 18:18:27.602822 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 18:18:27.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:27.604556 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 18:18:27.653327 kernel: Loading iSCSI transport class v2.0-870. Dec 12 18:18:27.673296 kernel: iscsi: registered transport (tcp) Dec 12 18:18:27.708115 kernel: iscsi: registered transport (qla4xxx) Dec 12 18:18:27.708263 kernel: QLogic iSCSI HBA Driver Dec 12 18:18:27.750857 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 18:18:27.771903 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 18:18:27.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:27.775834 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 18:18:27.849725 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 12 18:18:27.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:27.854494 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 12 18:18:27.856410 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 12 18:18:27.910390 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 12 18:18:27.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:27.911000 audit: BPF prog-id=7 op=LOAD Dec 12 18:18:27.911000 audit: BPF prog-id=8 op=LOAD Dec 12 18:18:27.913200 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 18:18:27.950221 systemd-udevd[578]: Using default interface naming scheme 'v257'. Dec 12 18:18:27.969207 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 18:18:27.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:18:27.973393 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 12 18:18:28.014845 dracut-pre-trigger[643]: rd.md=0: removing MD RAID activation Dec 12 18:18:28.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:28.014598 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 18:18:28.016000 audit: BPF prog-id=9 op=LOAD Dec 12 18:18:28.018583 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 18:18:28.060832 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 18:18:28.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:28.064963 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 18:18:28.086547 systemd-networkd[687]: lo: Link UP Dec 12 18:18:28.087580 systemd-networkd[687]: lo: Gained carrier Dec 12 18:18:28.089111 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 18:18:28.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:28.090150 systemd[1]: Reached target network.target - Network. Dec 12 18:18:28.176986 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 18:18:28.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:28.180447 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 12 18:18:28.331354 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 12 18:18:28.372594 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 12 18:18:28.386614 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 12 18:18:28.402912 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 12 18:18:28.405967 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 12 18:18:28.433793 disk-uuid[743]: Primary Header is updated. Dec 12 18:18:28.433793 disk-uuid[743]: Secondary Entries is updated. Dec 12 18:18:28.433793 disk-uuid[743]: Secondary Header is updated. Dec 12 18:18:28.451282 kernel: cryptd: max_cpu_qlen set to 1000 Dec 12 18:18:28.483288 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Dec 12 18:18:28.500621 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 18:18:28.539937 kernel: kauditd_printk_skb: 12 callbacks suppressed Dec 12 18:18:28.539978 kernel: audit: type=1131 audit(1765563508.501:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:18:28.540027 kernel: AES CTR mode by8 optimization enabled Dec 12 18:18:28.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:28.500856 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:18:28.501736 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:18:28.519905 systemd-networkd[687]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 18:18:28.519914 systemd-networkd[687]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 18:18:28.524301 systemd-networkd[687]: eth1: Link UP Dec 12 18:18:28.525047 systemd-networkd[687]: eth1: Gained carrier Dec 12 18:18:28.525069 systemd-networkd[687]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 18:18:28.543517 systemd-networkd[687]: eth1: DHCPv4 address 10.124.0.31/20 acquired from 169.254.169.253 Dec 12 18:18:28.548812 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:18:28.585786 systemd-networkd[687]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/yy-digitalocean.network Dec 12 18:18:28.588337 systemd-networkd[687]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Dec 12 18:18:28.590350 systemd-networkd[687]: eth0: Link UP Dec 12 18:18:28.593395 systemd-networkd[687]: eth0: Gained carrier Dec 12 18:18:28.593416 systemd-networkd[687]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/yy-digitalocean.network Dec 12 18:18:28.608355 systemd-networkd[687]: eth0: DHCPv4 address 64.23.253.31/20, gateway 64.23.240.1 acquired from 169.254.169.253 Dec 12 18:18:28.738400 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 12 18:18:28.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:28.767592 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 18:18:28.776846 kernel: audit: type=1130 audit(1765563508.766:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:28.773994 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 18:18:28.775018 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 18:18:28.778495 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 12 18:18:28.782443 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:18:28.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:28.791295 kernel: audit: type=1130 audit(1765563508.783:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 12 18:18:28.827263 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 12 18:18:28.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:28.836476 kernel: audit: type=1130 audit(1765563508.827:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:29.544272 disk-uuid[744]: Warning: The kernel is still using the old partition table. Dec 12 18:18:29.544272 disk-uuid[744]: The new table will be used at the next reboot or after you Dec 12 18:18:29.544272 disk-uuid[744]: run partprobe(8) or kpartx(8) Dec 12 18:18:29.544272 disk-uuid[744]: The operation has completed successfully. Dec 12 18:18:29.553410 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 12 18:18:29.553580 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 12 18:18:29.568862 kernel: audit: type=1130 audit(1765563509.554:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:29.568941 kernel: audit: type=1131 audit(1765563509.554:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:29.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:29.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:29.557448 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 12 18:18:29.606285 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (834) Dec 12 18:18:29.612668 kernel: BTRFS info (device vda6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 12 18:18:29.612750 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:18:29.619623 kernel: BTRFS info (device vda6): turning on async discard Dec 12 18:18:29.619720 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 18:18:29.630316 kernel: BTRFS info (device vda6): last unmount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 12 18:18:29.630787 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 12 18:18:29.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:29.636566 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 12 18:18:29.642100 kernel: audit: type=1130 audit(1765563509.632:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:18:29.872887 ignition[853]: Ignition 2.22.0 Dec 12 18:18:29.872909 ignition[853]: Stage: fetch-offline Dec 12 18:18:29.873227 ignition[853]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:18:29.875452 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 18:18:29.891689 kernel: audit: type=1130 audit(1765563509.876:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:29.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:29.873277 ignition[853]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 12 18:18:29.879459 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 12 18:18:29.873422 ignition[853]: parsed url from cmdline: "" Dec 12 18:18:29.873427 ignition[853]: no config URL provided Dec 12 18:18:29.873434 ignition[853]: reading system config file "/usr/lib/ignition/user.ign" Dec 12 18:18:29.873446 ignition[853]: no config at "/usr/lib/ignition/user.ign" Dec 12 18:18:29.873452 ignition[853]: failed to fetch config: resource requires networking Dec 12 18:18:29.873937 ignition[853]: Ignition finished successfully Dec 12 18:18:29.939922 ignition[863]: Ignition 2.22.0 Dec 12 18:18:29.939943 ignition[863]: Stage: fetch Dec 12 18:18:29.940287 ignition[863]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:18:29.940302 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 12 18:18:29.940521 ignition[863]: parsed url from cmdline: "" Dec 12 18:18:29.940525 ignition[863]: no config URL provided Dec 12 18:18:29.940532 ignition[863]: reading system config file "/usr/lib/ignition/user.ign" Dec 12 18:18:29.940541 ignition[863]: no config at "/usr/lib/ignition/user.ign" Dec 12 18:18:29.940570 ignition[863]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Dec 12 18:18:29.979877 ignition[863]: GET result: OK Dec 12 18:18:29.980891 ignition[863]: parsing config with SHA512: 926255e1cb4b5dd58d0b4611a98810edacbde7889f8a3d72ca24204e82f5be2ade7cf851ca91ac81e6ac89d0cb2ab8a0e2f15b3783deb1576b1679d1ba018195 Dec 12 18:18:29.991915 unknown[863]: fetched base config from "system" Dec 12 18:18:29.992870 unknown[863]: fetched base config from "system" Dec 12 18:18:29.992887 unknown[863]: fetched user config from "digitalocean" Dec 12 18:18:29.995809 ignition[863]: fetch: fetch complete Dec 12 18:18:29.995822 ignition[863]: fetch: fetch passed Dec 12 18:18:29.995905 ignition[863]: Ignition finished successfully Dec 12 18:18:29.998755 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 12 18:18:30.007139 kernel: audit: type=1130 audit(1765563509.999:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:29.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:30.002510 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Dec 12 18:18:30.058764 ignition[871]: Ignition 2.22.0 Dec 12 18:18:30.058778 ignition[871]: Stage: kargs Dec 12 18:18:30.058955 ignition[871]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:18:30.058966 ignition[871]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 12 18:18:30.060835 ignition[871]: kargs: kargs passed Dec 12 18:18:30.064082 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 12 18:18:30.073796 kernel: audit: type=1130 audit(1765563510.065:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:30.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:30.060935 ignition[871]: Ignition finished successfully Dec 12 18:18:30.067455 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 12 18:18:30.136615 ignition[878]: Ignition 2.22.0 Dec 12 18:18:30.136634 ignition[878]: Stage: disks Dec 12 18:18:30.136903 ignition[878]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:18:30.136917 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 12 18:18:30.138324 ignition[878]: disks: disks passed Dec 12 18:18:30.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:30.140340 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 12 18:18:30.138410 ignition[878]: Ignition finished successfully Dec 12 18:18:30.142612 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 12 18:18:30.143972 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 12 18:18:30.145555 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 18:18:30.147423 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 18:18:30.149156 systemd[1]: Reached target basic.target - Basic System. Dec 12 18:18:30.153508 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 12 18:18:30.159404 systemd-networkd[687]: eth1: Gained IPv6LL Dec 12 18:18:30.194992 systemd-fsck[886]: ROOT: clean, 15/456736 files, 38230/456704 blocks Dec 12 18:18:30.199661 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 12 18:18:30.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:30.202904 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 12 18:18:30.223408 systemd-networkd[687]: eth0: Gained IPv6LL Dec 12 18:18:30.371311 kernel: EXT4-fs (vda9): mounted filesystem 7cac6192-738c-43cc-9341-24f71d091e91 r/w with ordered data mode. Quota mode: none. Dec 12 18:18:30.371801 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 12 18:18:30.373664 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 12 18:18:30.377072 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 18:18:30.381380 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Dec 12 18:18:30.388498 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Dec 12 18:18:30.393489 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 12 18:18:30.394457 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 12 18:18:30.394508 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 18:18:30.401081 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 12 18:18:30.407502 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 12 18:18:30.413294 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (894) Dec 12 18:18:30.419617 kernel: BTRFS info (device vda6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 12 18:18:30.423296 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:18:30.448150 kernel: BTRFS info (device vda6): turning on async discard Dec 12 18:18:30.448236 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 18:18:30.452568 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 12 18:18:30.549571 coreos-metadata[896]: Dec 12 18:18:30.549 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 12 18:18:30.554677 coreos-metadata[897]: Dec 12 18:18:30.554 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 12 18:18:30.558296 initrd-setup-root[924]: cut: /sysroot/etc/passwd: No such file or directory Dec 12 18:18:30.563978 initrd-setup-root[931]: cut: /sysroot/etc/group: No such file or directory Dec 12 18:18:30.565408 coreos-metadata[897]: Dec 12 18:18:30.564 INFO Fetch successful Dec 12 18:18:30.567497 coreos-metadata[896]: Dec 12 18:18:30.566 INFO Fetch successful Dec 12 18:18:30.574471 coreos-metadata[897]: Dec 12 18:18:30.574 INFO wrote hostname ci-4515.1.0-f-8be9c60ab1 to /sysroot/etc/hostname Dec 12 18:18:30.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:30.580852 initrd-setup-root[938]: cut: /sysroot/etc/shadow: No such file or directory Dec 12 18:18:30.577607 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 12 18:18:30.588049 initrd-setup-root[946]: cut: /sysroot/etc/gshadow: No such file or directory Dec 12 18:18:30.590363 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Dec 12 18:18:30.590549 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Dec 12 18:18:30.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-afterburn-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:30.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-afterburn-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:30.731085 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 12 18:18:30.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:18:30.734226 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 12 18:18:30.737285 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 12 18:18:30.758308 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 12 18:18:30.761032 kernel: BTRFS info (device vda6): last unmount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 12 18:18:30.786312 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 12 18:18:30.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:30.809149 ignition[1016]: INFO : Ignition 2.22.0 Dec 12 18:18:30.809149 ignition[1016]: INFO : Stage: mount Dec 12 18:18:30.812467 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 18:18:30.812467 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 12 18:18:30.812467 ignition[1016]: INFO : mount: mount passed Dec 12 18:18:30.812467 ignition[1016]: INFO : Ignition finished successfully Dec 12 18:18:30.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:30.813850 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 12 18:18:30.818448 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 12 18:18:30.848148 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 18:18:30.886282 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1026) Dec 12 18:18:30.890301 kernel: BTRFS info (device vda6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 12 18:18:30.890391 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:18:30.899757 kernel: BTRFS info (device vda6): turning on async discard Dec 12 18:18:30.899843 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 18:18:30.902996 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 12 18:18:30.955334 ignition[1043]: INFO : Ignition 2.22.0 Dec 12 18:18:30.955334 ignition[1043]: INFO : Stage: files Dec 12 18:18:30.957303 ignition[1043]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 18:18:30.957303 ignition[1043]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 12 18:18:30.957303 ignition[1043]: DEBUG : files: compiled without relabeling support, skipping Dec 12 18:18:30.960308 ignition[1043]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 12 18:18:30.960308 ignition[1043]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 12 18:18:30.962921 ignition[1043]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 12 18:18:30.964207 ignition[1043]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 12 18:18:30.964207 ignition[1043]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 12 18:18:30.963441 unknown[1043]: wrote ssh authorized keys file for user: core Dec 12 18:18:30.967678 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Dec 12 18:18:30.967678 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Dec 12 18:18:31.132271 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 12 18:18:31.259743 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Dec 12 18:18:31.259743 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 12 18:18:31.259743 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 12 18:18:31.259743 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 12 18:18:31.259743 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 12 18:18:31.259743 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 18:18:31.259743 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 18:18:31.259743 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 18:18:31.259743 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 18:18:31.273769 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 18:18:31.273769 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 18:18:31.273769 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 12 18:18:31.273769 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: 
op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 12 18:18:31.273769 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 12 18:18:31.273769 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Dec 12 18:18:31.697743 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 12 18:18:32.240451 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 12 18:18:32.242427 ignition[1043]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 12 18:18:32.243348 ignition[1043]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 18:18:32.246990 ignition[1043]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 18:18:32.246990 ignition[1043]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 12 18:18:32.246990 ignition[1043]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 12 18:18:32.251644 ignition[1043]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 12 18:18:32.251644 ignition[1043]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 12 18:18:32.251644 ignition[1043]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 12 18:18:32.251644 ignition[1043]: INFO : files: files passed Dec 12 18:18:32.251644 ignition[1043]: INFO : Ignition finished successfully Dec 12 18:18:32.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.250526 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 12 18:18:32.255515 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 12 18:18:32.258450 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 12 18:18:32.276879 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 12 18:18:32.277480 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 12 18:18:32.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:18:32.291671 initrd-setup-root-after-ignition[1074]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 18:18:32.293480 initrd-setup-root-after-ignition[1074]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 12 18:18:32.296322 initrd-setup-root-after-ignition[1077]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 18:18:32.297405 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 18:18:32.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.299757 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 12 18:18:32.302336 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 12 18:18:32.372499 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 12 18:18:32.372645 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 12 18:18:32.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.374782 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 12 18:18:32.376374 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 12 18:18:32.378400 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 12 18:18:32.381529 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 12 18:18:32.429554 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 18:18:32.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.433113 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 12 18:18:32.475151 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 12 18:18:32.477278 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 12 18:18:32.479604 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 18:18:32.481123 systemd[1]: Stopped target timers.target - Timer Units. Dec 12 18:18:32.483257 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 12 18:18:32.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.483542 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 18:18:32.485576 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 12 18:18:32.487718 systemd[1]: Stopped target basic.target - Basic System. Dec 12 18:18:32.489381 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
Dec 12 18:18:32.491136 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 18:18:32.492987 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 12 18:18:32.495182 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 12 18:18:32.497176 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 12 18:18:32.498810 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 18:18:32.500621 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 12 18:18:32.502596 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 12 18:18:32.511913 systemd[1]: Stopped target swap.target - Swaps. Dec 12 18:18:32.512961 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 12 18:18:32.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.513209 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 12 18:18:32.514916 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 12 18:18:32.516142 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 18:18:32.517730 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 12 18:18:32.519409 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 18:18:32.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.520694 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 12 18:18:32.520962 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 12 18:18:32.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.523056 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 12 18:18:32.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.523378 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 18:18:32.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.525440 systemd[1]: ignition-files.service: Deactivated successfully. Dec 12 18:18:32.525665 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 12 18:18:32.527405 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 12 18:18:32.527673 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 12 18:18:32.531382 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 12 18:18:32.535853 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Dec 12 18:18:32.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.538183 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 12 18:18:32.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.539506 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 18:18:32.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.541937 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 12 18:18:32.542179 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 18:18:32.544547 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 12 18:18:32.544800 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 18:18:32.558862 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 12 18:18:32.561814 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 12 18:18:32.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.594088 ignition[1098]: INFO : Ignition 2.22.0 Dec 12 18:18:32.596371 ignition[1098]: INFO : Stage: umount Dec 12 18:18:32.596371 ignition[1098]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 18:18:32.596371 ignition[1098]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 12 18:18:32.595032 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 12 18:18:32.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.603789 ignition[1098]: INFO : umount: umount passed Dec 12 18:18:32.603789 ignition[1098]: INFO : Ignition finished successfully Dec 12 18:18:32.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.601393 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 12 18:18:32.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.601605 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 12 18:18:32.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.603488 systemd[1]: ignition-mount.service: Deactivated successfully. 
Dec 12 18:18:32.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.603658 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 12 18:18:32.605833 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 12 18:18:32.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.605918 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 12 18:18:32.607343 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 12 18:18:32.607426 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 12 18:18:32.608768 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 12 18:18:32.608840 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 12 18:18:32.610309 systemd[1]: Stopped target network.target - Network. Dec 12 18:18:32.611501 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 12 18:18:32.611585 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 18:18:32.612924 systemd[1]: Stopped target paths.target - Path Units. Dec 12 18:18:32.614353 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 12 18:18:32.618334 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 18:18:32.619961 systemd[1]: Stopped target slices.target - Slice Units. Dec 12 18:18:32.621471 systemd[1]: Stopped target sockets.target - Socket Units. Dec 12 18:18:32.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.623427 systemd[1]: iscsid.socket: Deactivated successfully. Dec 12 18:18:32.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.623495 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 18:18:32.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.624847 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 12 18:18:32.624908 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 18:18:32.626784 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 12 18:18:32.626837 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 12 18:18:32.628524 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 12 18:18:32.628623 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 12 18:18:32.630303 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 12 18:18:32.630395 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 12 18:18:32.631837 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 12 18:18:32.631929 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Dec 12 18:18:32.633384 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 12 18:18:32.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.635013 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 12 18:18:32.646546 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 12 18:18:32.652000 audit: BPF prog-id=6 op=UNLOAD Dec 12 18:18:32.646737 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 12 18:18:32.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.651419 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 12 18:18:32.651620 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 12 18:18:32.656000 audit: BPF prog-id=9 op=UNLOAD Dec 12 18:18:32.656715 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 12 18:18:32.657700 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 12 18:18:32.657762 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 12 18:18:32.662385 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 12 18:18:32.663164 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 12 18:18:32.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.663293 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 18:18:32.665865 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 12 18:18:32.666090 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 12 18:18:32.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.666926 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 12 18:18:32.667003 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 12 18:18:32.671462 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 18:18:32.687632 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 12 18:18:32.688751 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 18:18:32.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.691409 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 12 18:18:32.691471 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 12 18:18:32.692463 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Dec 12 18:18:32.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.692529 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 18:18:32.695695 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 12 18:18:32.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.695802 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 12 18:18:32.697717 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 12 18:18:32.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.697821 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 12 18:18:32.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.699101 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 12 18:18:32.699194 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 18:18:32.702948 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 12 18:18:32.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.711171 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 12 18:18:32.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.711388 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 18:18:32.713100 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 12 18:18:32.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.713184 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 18:18:32.714655 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 12 18:18:32.714729 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 18:18:32.718459 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 12 18:18:32.718547 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 18:18:32.722372 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Dec 12 18:18:32.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.722455 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:18:32.727594 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 12 18:18:32.730645 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 12 18:18:32.740660 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 12 18:18:32.741526 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 12 18:18:32.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:32.742984 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 12 18:18:32.745568 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 12 18:18:32.770533 systemd[1]: Switching root. Dec 12 18:18:32.822993 systemd-journald[295]: Journal stopped Dec 12 18:18:34.692579 systemd-journald[295]: Received SIGTERM from PID 1 (systemd). Dec 12 18:18:34.692663 kernel: SELinux: policy capability network_peer_controls=1 Dec 12 18:18:34.692685 kernel: SELinux: policy capability open_perms=1 Dec 12 18:18:34.692700 kernel: SELinux: policy capability extended_socket_class=1 Dec 12 18:18:34.692721 kernel: SELinux: policy capability always_check_network=0 Dec 12 18:18:34.692735 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 12 18:18:34.692777 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 12 18:18:34.692809 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 12 18:18:34.692836 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 12 18:18:34.692862 kernel: SELinux: policy capability userspace_initial_context=0 Dec 12 18:18:34.692890 systemd[1]: Successfully loaded SELinux policy in 87.380ms. Dec 12 18:18:34.692925 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.463ms. Dec 12 18:18:34.692947 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 18:18:34.692980 systemd[1]: Detected virtualization kvm. Dec 12 18:18:34.693001 systemd[1]: Detected architecture x86-64. Dec 12 18:18:34.693020 systemd[1]: Detected first boot. Dec 12 18:18:34.693040 systemd[1]: Hostname set to . Dec 12 18:18:34.693072 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 12 18:18:34.693089 zram_generator::config[1142]: No configuration found. 
Dec 12 18:18:34.693114 kernel: Guest personality initialized and is inactive Dec 12 18:18:34.693128 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 12 18:18:34.693148 kernel: Initialized host personality Dec 12 18:18:34.693161 kernel: NET: Registered PF_VSOCK protocol family Dec 12 18:18:34.693178 systemd[1]: Populated /etc with preset unit settings. Dec 12 18:18:34.693193 kernel: kauditd_printk_skb: 60 callbacks suppressed Dec 12 18:18:34.693208 kernel: audit: type=1334 audit(1765563514.194:93): prog-id=12 op=LOAD Dec 12 18:18:34.693227 kernel: audit: type=1334 audit(1765563514.194:94): prog-id=3 op=UNLOAD Dec 12 18:18:34.693255 kernel: audit: type=1334 audit(1765563514.197:95): prog-id=13 op=LOAD Dec 12 18:18:34.693269 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 12 18:18:34.693283 kernel: audit: type=1334 audit(1765563514.197:96): prog-id=14 op=LOAD Dec 12 18:18:34.693296 kernel: audit: type=1334 audit(1765563514.197:97): prog-id=4 op=UNLOAD Dec 12 18:18:34.693310 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 12 18:18:34.693323 kernel: audit: type=1334 audit(1765563514.197:98): prog-id=5 op=UNLOAD Dec 12 18:18:34.693345 kernel: audit: type=1131 audit(1765563514.199:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:34.693358 kernel: audit: type=1334 audit(1765563514.216:100): prog-id=12 op=UNLOAD Dec 12 18:18:34.693372 kernel: audit: type=1130 audit(1765563514.219:101): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:34.693385 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 12 18:18:34.693399 kernel: audit: type=1131 audit(1765563514.219:102): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:34.693422 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 12 18:18:34.693443 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 12 18:18:34.693458 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 12 18:18:34.693471 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 12 18:18:34.693486 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 12 18:18:34.693500 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 12 18:18:34.693523 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 12 18:18:34.693539 systemd[1]: Created slice user.slice - User and Session Slice. Dec 12 18:18:34.693553 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 18:18:34.693568 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 18:18:34.693583 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 12 18:18:34.693598 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Dec 12 18:18:34.693613 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 12 18:18:34.693637 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 18:18:34.693652 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 12 18:18:34.693666 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 18:18:34.693679 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 18:18:34.693693 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 12 18:18:34.693707 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 12 18:18:34.693721 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 12 18:18:34.693743 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 12 18:18:34.693757 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 18:18:34.693771 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 18:18:34.693785 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 12 18:18:34.693800 systemd[1]: Reached target slices.target - Slice Units. Dec 12 18:18:34.693814 systemd[1]: Reached target swap.target - Swaps. Dec 12 18:18:34.693828 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 12 18:18:34.693848 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 12 18:18:34.693864 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 12 18:18:34.693879 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 12 18:18:34.693894 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 12 18:18:34.693907 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 18:18:34.693939 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 12 18:18:34.693974 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 12 18:18:34.693998 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 18:18:34.694012 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 18:18:34.694027 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 12 18:18:34.694041 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 12 18:18:34.694056 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 12 18:18:34.694070 systemd[1]: Mounting media.mount - External Media Directory... Dec 12 18:18:34.694085 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:18:34.694105 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 12 18:18:34.694119 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 12 18:18:34.694132 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 12 18:18:34.694146 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 12 18:18:34.694161 systemd[1]: Reached target machines.target - Containers. 
Dec 12 18:18:34.694175 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 12 18:18:34.694189 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:18:34.694210 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 18:18:34.694225 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 12 18:18:34.701292 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 18:18:34.701364 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 18:18:34.701395 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 18:18:34.701425 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 12 18:18:34.701455 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 18:18:34.701520 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 12 18:18:34.701558 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 12 18:18:34.701588 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 12 18:18:34.701617 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 12 18:18:34.701654 systemd[1]: Stopped systemd-fsck-usr.service. Dec 12 18:18:34.701685 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:18:34.701738 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 18:18:34.701768 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 18:18:34.703279 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 18:18:34.703310 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 12 18:18:34.703342 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 12 18:18:34.703357 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 18:18:34.703381 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:18:34.703396 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 12 18:18:34.703410 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 12 18:18:34.703427 systemd[1]: Mounted media.mount - External Media Directory. Dec 12 18:18:34.703442 kernel: fuse: init (API version 7.41) Dec 12 18:18:34.703458 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 12 18:18:34.703473 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 12 18:18:34.703495 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 12 18:18:34.703511 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 18:18:34.703526 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 12 18:18:34.703540 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Dec 12 18:18:34.703555 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 18:18:34.703569 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 18:18:34.703590 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 18:18:34.703611 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 18:18:34.703625 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 18:18:34.703678 systemd-journald[1217]: Collecting audit messages is enabled. Dec 12 18:18:34.703706 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 18:18:34.703721 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 12 18:18:34.703736 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 12 18:18:34.703758 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 18:18:34.703773 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 18:18:34.703789 systemd-journald[1217]: Journal started Dec 12 18:18:34.703815 systemd-journald[1217]: Runtime Journal (/run/log/journal/5144897bcb514144bd7c4bb61e84b088) is 4.8M, max 39.1M, 34.2M free. Dec 12 18:18:34.336000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 12 18:18:34.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:34.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:34.534000 audit: BPF prog-id=14 op=UNLOAD Dec 12 18:18:34.534000 audit: BPF prog-id=13 op=UNLOAD Dec 12 18:18:34.538000 audit: BPF prog-id=15 op=LOAD Dec 12 18:18:34.538000 audit: BPF prog-id=16 op=LOAD Dec 12 18:18:34.538000 audit: BPF prog-id=17 op=LOAD Dec 12 18:18:34.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:34.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:34.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:34.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:34.708403 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 18:18:34.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:18:34.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:34.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:34.683000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 12 18:18:34.683000 audit[1217]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fffc8fb78c0 a2=4000 a3=0 items=0 ppid=1 pid=1217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:34.683000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 12 18:18:34.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:34.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:34.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:34.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:34.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:34.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:34.173517 systemd[1]: Queued start job for default target multi-user.target. Dec 12 18:18:34.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:34.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:34.198781 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 12 18:18:34.199954 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 12 18:18:34.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:18:34.709646 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 12 18:18:34.710927 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 12 18:18:34.728671 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 18:18:34.731193 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 12 18:18:34.732077 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 12 18:18:34.732123 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 18:18:34.735341 kernel: ACPI: bus type drm_connector registered Dec 12 18:18:34.736696 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 12 18:18:34.740631 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:18:34.740882 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 18:18:34.745496 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 12 18:18:34.751563 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 12 18:18:34.754551 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 18:18:34.762184 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 12 18:18:34.763041 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 18:18:34.767085 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 18:18:34.775503 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 12 18:18:34.780592 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 12 18:18:34.784091 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 12 18:18:34.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:34.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:34.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:34.789319 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 18:18:34.789956 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 18:18:34.805473 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 12 18:18:34.814870 systemd-journald[1217]: Time spent on flushing to /var/log/journal/5144897bcb514144bd7c4bb61e84b088 is 101.456ms for 1146 entries. 
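The journald line just above gives enough numbers for a quick back-of-the-envelope check: flushing the runtime journal took 101.456 ms for 1146 entries, roughly 88.5 µs per entry. A small sketch (not from the log) that does the arithmetic and asks journald for its current on-disk footprint:

```python
#!/usr/bin/env python3
"""Sketch: per-entry flush cost from the figures logged above, plus the
current journal disk usage as reported by journalctl."""
import subprocess

flush_ms = 101.456          # "Time spent on flushing ..." from the log
entries = 1146              # "... for 1146 entries"
print(f"~{flush_ms / entries * 1000:.1f} us per entry flushed")  # ~88.5 us

# Combined runtime + persistent journal size on the running host.
print(subprocess.run(["journalctl", "--disk-usage"],
                     capture_output=True, text=True).stdout.strip())
```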
Dec 12 18:18:34.814870 systemd-journald[1217]: System Journal (/var/log/journal/5144897bcb514144bd7c4bb61e84b088) is 8M, max 163.5M, 155.5M free. Dec 12 18:18:34.945827 systemd-journald[1217]: Received client request to flush runtime journal. Dec 12 18:18:34.947402 kernel: loop1: detected capacity change from 0 to 8 Dec 12 18:18:34.947457 kernel: loop2: detected capacity change from 0 to 111544 Dec 12 18:18:34.947475 kernel: loop3: detected capacity change from 0 to 224512 Dec 12 18:18:34.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:34.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:34.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:34.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:34.817644 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 12 18:18:34.823605 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 12 18:18:34.878574 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 18:18:34.891972 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. Dec 12 18:18:34.891998 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. Dec 12 18:18:34.895903 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 12 18:18:34.903209 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 18:18:34.906517 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 12 18:18:34.954189 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 12 18:18:34.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:34.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:34.970169 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 18:18:34.985278 kernel: loop4: detected capacity change from 0 to 119256 Dec 12 18:18:35.002874 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 12 18:18:35.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:18:35.005000 audit: BPF prog-id=18 op=LOAD Dec 12 18:18:35.005000 audit: BPF prog-id=19 op=LOAD Dec 12 18:18:35.006000 audit: BPF prog-id=20 op=LOAD Dec 12 18:18:35.008501 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 12 18:18:35.010000 audit: BPF prog-id=21 op=LOAD Dec 12 18:18:35.012542 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 18:18:35.017740 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 18:18:35.034283 kernel: loop5: detected capacity change from 0 to 8 Dec 12 18:18:35.049000 audit: BPF prog-id=22 op=LOAD Dec 12 18:18:35.049000 audit: BPF prog-id=23 op=LOAD Dec 12 18:18:35.049000 audit: BPF prog-id=24 op=LOAD Dec 12 18:18:35.051321 kernel: loop6: detected capacity change from 0 to 111544 Dec 12 18:18:35.052552 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 12 18:18:35.056000 audit: BPF prog-id=25 op=LOAD Dec 12 18:18:35.056000 audit: BPF prog-id=26 op=LOAD Dec 12 18:18:35.056000 audit: BPF prog-id=27 op=LOAD Dec 12 18:18:35.058135 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 12 18:18:35.067069 systemd-tmpfiles[1289]: ACLs are not supported, ignoring. Dec 12 18:18:35.067100 systemd-tmpfiles[1289]: ACLs are not supported, ignoring. Dec 12 18:18:35.079279 kernel: loop7: detected capacity change from 0 to 224512 Dec 12 18:18:35.080568 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 18:18:35.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:35.102724 kernel: loop1: detected capacity change from 0 to 119256 Dec 12 18:18:35.116748 (sd-merge)[1290]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-digitalocean.raw'. Dec 12 18:18:35.129490 (sd-merge)[1290]: Merged extensions into '/usr'. Dec 12 18:18:35.144455 systemd[1]: Reload requested from client PID 1264 ('systemd-sysext') (unit systemd-sysext.service)... Dec 12 18:18:35.144482 systemd[1]: Reloading... Dec 12 18:18:35.197498 systemd-nsresourced[1292]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 12 18:18:35.337266 zram_generator::config[1336]: No configuration found. Dec 12 18:18:35.374875 systemd-resolved[1288]: Positive Trust Anchors: Dec 12 18:18:35.374904 systemd-resolved[1288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 18:18:35.374914 systemd-resolved[1288]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 12 18:18:35.374985 systemd-resolved[1288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 18:18:35.409940 systemd-resolved[1288]: Using system hostname 'ci-4515.1.0-f-8be9c60ab1'. 
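The two "Positive Trust Anchors" systemd-resolved logs above are the DNS root zone's DS records. A small sketch (parsing only the text already in the log) that splits them into their RFC 4034 fields, making the key tag, algorithm, and digest type explicit:

```python
#!/usr/bin/env python3
"""Sketch: decompose the root DS records that systemd-resolved loaded."""
ANCHORS = [
    ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d",
    ". IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16",
]

for rr in ANCHORS:
    owner, _klass, _rtype, key_tag, algorithm, digest_type, digest = rr.split()
    print(f"owner={owner} key_tag={key_tag} "
          f"algorithm={algorithm} (8 = RSA/SHA-256) "
          f"digest_type={digest_type} (2 = SHA-256) digest={digest[:16]}...")
```

The negative trust anchors listed alongside them (home.arpa, the RFC 1918 reverse zones, .local, and so on) simply exempt private and link-local names from DNSSEC validation.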
Dec 12 18:18:35.438085 systemd-oomd[1287]: No swap; memory pressure usage will be degraded Dec 12 18:18:35.780208 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 12 18:18:35.781160 systemd[1]: Reloading finished in 635 ms. Dec 12 18:18:35.797169 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 12 18:18:35.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:35.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:35.798441 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 12 18:18:35.799805 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 18:18:35.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:35.801309 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 12 18:18:35.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:35.802815 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 12 18:18:35.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:35.808107 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 18:18:35.811323 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 12 18:18:35.817575 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 12 18:18:35.827462 systemd[1]: Starting ensure-sysext.service... Dec 12 18:18:35.835792 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
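systemd-oomd warns above that there is no swap and that memory-pressure tracking will be degraded. A minimal sketch, not part of the log, that checks the same thing the daemon sees by reading /proc/swaps (only the header line means no swap devices are configured):

```python
#!/usr/bin/env python3
"""Sketch: confirm the 'No swap' condition systemd-oomd reported above."""
with open("/proc/swaps") as f:
    lines = f.read().splitlines()

swaps = lines[1:]  # the first line is the column header
if swaps:
    for entry in swaps:
        print("swap device:", entry.split()[0])
else:
    print("no swap configured -- systemd-oomd falls back to degraded "
          "memory-pressure tracking, as logged")
```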
Dec 12 18:18:35.841000 audit: BPF prog-id=28 op=LOAD Dec 12 18:18:35.841000 audit: BPF prog-id=15 op=UNLOAD Dec 12 18:18:35.841000 audit: BPF prog-id=29 op=LOAD Dec 12 18:18:35.841000 audit: BPF prog-id=30 op=LOAD Dec 12 18:18:35.841000 audit: BPF prog-id=16 op=UNLOAD Dec 12 18:18:35.841000 audit: BPF prog-id=17 op=UNLOAD Dec 12 18:18:35.842000 audit: BPF prog-id=31 op=LOAD Dec 12 18:18:35.842000 audit: BPF prog-id=25 op=UNLOAD Dec 12 18:18:35.842000 audit: BPF prog-id=32 op=LOAD Dec 12 18:18:35.842000 audit: BPF prog-id=33 op=LOAD Dec 12 18:18:35.842000 audit: BPF prog-id=26 op=UNLOAD Dec 12 18:18:35.842000 audit: BPF prog-id=27 op=UNLOAD Dec 12 18:18:35.843000 audit: BPF prog-id=34 op=LOAD Dec 12 18:18:35.843000 audit: BPF prog-id=18 op=UNLOAD Dec 12 18:18:35.843000 audit: BPF prog-id=35 op=LOAD Dec 12 18:18:35.843000 audit: BPF prog-id=36 op=LOAD Dec 12 18:18:35.843000 audit: BPF prog-id=19 op=UNLOAD Dec 12 18:18:35.843000 audit: BPF prog-id=20 op=UNLOAD Dec 12 18:18:35.844000 audit: BPF prog-id=37 op=LOAD Dec 12 18:18:35.844000 audit: BPF prog-id=21 op=UNLOAD Dec 12 18:18:35.847000 audit: BPF prog-id=38 op=LOAD Dec 12 18:18:35.851000 audit: BPF prog-id=22 op=UNLOAD Dec 12 18:18:35.851000 audit: BPF prog-id=39 op=LOAD Dec 12 18:18:35.851000 audit: BPF prog-id=40 op=LOAD Dec 12 18:18:35.851000 audit: BPF prog-id=23 op=UNLOAD Dec 12 18:18:35.851000 audit: BPF prog-id=24 op=UNLOAD Dec 12 18:18:35.858189 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 12 18:18:35.860262 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 12 18:18:35.877531 systemd[1]: Reload requested from client PID 1380 ('systemctl') (unit ensure-sysext.service)... Dec 12 18:18:35.877562 systemd[1]: Reloading... Dec 12 18:18:35.883296 systemd-tmpfiles[1381]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 12 18:18:35.883795 systemd-tmpfiles[1381]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 12 18:18:35.884468 systemd-tmpfiles[1381]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 12 18:18:35.887313 systemd-tmpfiles[1381]: ACLs are not supported, ignoring. Dec 12 18:18:35.887613 systemd-tmpfiles[1381]: ACLs are not supported, ignoring. Dec 12 18:18:35.898971 systemd-tmpfiles[1381]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 18:18:35.899269 systemd-tmpfiles[1381]: Skipping /boot Dec 12 18:18:35.918397 systemd-tmpfiles[1381]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 18:18:35.918564 systemd-tmpfiles[1381]: Skipping /boot Dec 12 18:18:36.046270 zram_generator::config[1415]: No configuration found. Dec 12 18:18:36.408879 systemd[1]: Reloading finished in 530 ms. Dec 12 18:18:36.424290 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 18:18:36.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:18:36.431000 audit: BPF prog-id=41 op=LOAD Dec 12 18:18:36.432000 audit: BPF prog-id=28 op=UNLOAD Dec 12 18:18:36.432000 audit: BPF prog-id=42 op=LOAD Dec 12 18:18:36.432000 audit: BPF prog-id=43 op=LOAD Dec 12 18:18:36.432000 audit: BPF prog-id=29 op=UNLOAD Dec 12 18:18:36.432000 audit: BPF prog-id=30 op=UNLOAD Dec 12 18:18:36.433000 audit: BPF prog-id=44 op=LOAD Dec 12 18:18:36.433000 audit: BPF prog-id=34 op=UNLOAD Dec 12 18:18:36.433000 audit: BPF prog-id=45 op=LOAD Dec 12 18:18:36.433000 audit: BPF prog-id=46 op=LOAD Dec 12 18:18:36.433000 audit: BPF prog-id=35 op=UNLOAD Dec 12 18:18:36.433000 audit: BPF prog-id=36 op=UNLOAD Dec 12 18:18:36.434000 audit: BPF prog-id=47 op=LOAD Dec 12 18:18:36.434000 audit: BPF prog-id=31 op=UNLOAD Dec 12 18:18:36.434000 audit: BPF prog-id=48 op=LOAD Dec 12 18:18:36.437000 audit: BPF prog-id=49 op=LOAD Dec 12 18:18:36.437000 audit: BPF prog-id=32 op=UNLOAD Dec 12 18:18:36.437000 audit: BPF prog-id=33 op=UNLOAD Dec 12 18:18:36.439000 audit: BPF prog-id=50 op=LOAD Dec 12 18:18:36.439000 audit: BPF prog-id=38 op=UNLOAD Dec 12 18:18:36.439000 audit: BPF prog-id=51 op=LOAD Dec 12 18:18:36.439000 audit: BPF prog-id=52 op=LOAD Dec 12 18:18:36.439000 audit: BPF prog-id=39 op=UNLOAD Dec 12 18:18:36.439000 audit: BPF prog-id=40 op=UNLOAD Dec 12 18:18:36.442000 audit: BPF prog-id=53 op=LOAD Dec 12 18:18:36.442000 audit: BPF prog-id=37 op=UNLOAD Dec 12 18:18:36.461515 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 18:18:36.467480 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 12 18:18:36.475839 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 12 18:18:36.484215 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 12 18:18:36.487159 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 12 18:18:36.489606 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 12 18:18:36.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:36.498000 audit: BPF prog-id=8 op=UNLOAD Dec 12 18:18:36.498000 audit: BPF prog-id=7 op=UNLOAD Dec 12 18:18:36.500000 audit: BPF prog-id=54 op=LOAD Dec 12 18:18:36.500000 audit: BPF prog-id=55 op=LOAD Dec 12 18:18:36.505149 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 18:18:36.514788 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:18:36.515022 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:18:36.516662 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 18:18:36.521741 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 18:18:36.525554 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 18:18:36.526535 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:18:36.526876 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
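The long runs of "BPF prog-id=... op=LOAD/UNLOAD" audit records above appear to be systemd re-creating its per-unit BPF programs each time it reloads. A sketch for inspecting what is currently loaded; bpftool is an assumption here (it is not part of a minimal Flatcar image), so the script checks for it first:

```python
#!/usr/bin/env python3
"""Sketch: list loaded BPF programs, if bpftool happens to be available."""
import shutil
import subprocess

if shutil.which("bpftool"):
    # Requires root; prints one line per loaded BPF program with its id.
    print(subprocess.run(["bpftool", "prog", "show"],
                         capture_output=True, text=True).stdout)
else:
    print("bpftool not installed on this image; "
          "run `bpftool prog show` on a host that has it")
```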
Dec 12 18:18:36.527048 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:18:36.527213 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:18:36.534023 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:18:36.534513 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:18:36.534832 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:18:36.535104 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 18:18:36.536314 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:18:36.536543 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:18:36.542904 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:18:36.543598 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:18:36.546497 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 18:18:36.547383 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:18:36.547578 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 18:18:36.547673 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:18:36.547810 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:18:36.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:36.564908 systemd[1]: Finished ensure-sysext.service. Dec 12 18:18:36.567000 audit: BPF prog-id=56 op=LOAD Dec 12 18:18:36.570711 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 12 18:18:36.572000 audit[1469]: SYSTEM_BOOT pid=1469 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 12 18:18:36.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 12 18:18:36.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:36.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:36.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:36.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:36.591232 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 12 18:18:36.594822 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 18:18:36.595174 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 18:18:36.598951 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 18:18:36.600315 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 18:18:36.601486 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 18:18:36.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:36.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:36.604003 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 18:18:36.604301 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 18:18:36.631090 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 18:18:36.632201 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 18:18:36.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:36.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:36.635941 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 18:18:36.650719 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 12 18:18:36.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:18:36.685409 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 12 18:18:36.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:36.687136 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 12 18:18:36.694435 systemd-udevd[1471]: Using default interface naming scheme 'v257'. Dec 12 18:18:36.706000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 12 18:18:36.706000 audit[1500]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc815f3700 a2=420 a3=0 items=0 ppid=1459 pid=1500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:36.706000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 18:18:36.707532 augenrules[1500]: No rules Dec 12 18:18:36.708722 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 18:18:36.709156 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 18:18:36.758942 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 18:18:36.768292 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 18:18:36.850725 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 12 18:18:36.853225 systemd[1]: Reached target time-set.target - System Time Set. Dec 12 18:18:37.065564 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 12 18:18:37.074364 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped. Dec 12 18:18:37.076019 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Dec 12 18:18:37.076880 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:18:37.077035 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:18:37.079559 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 18:18:37.089743 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 18:18:37.096558 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 18:18:37.097485 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:18:37.097718 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 18:18:37.097772 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
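The audit PROCTITLE record above encodes the command line as hex with NUL bytes separating argv entries. A short sketch (using only the hex already present in the log) that decodes it, showing the exact auditctl invocation recorded alongside the augenrules "No rules" message:

```python
#!/usr/bin/env python3
"""Sketch: decode the hex-encoded PROCTITLE field from the audit record above."""
PROCTITLE_HEX = (
    "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
)

argv = bytes.fromhex(PROCTITLE_HEX).split(b"\x00")
print(" ".join(arg.decode() for arg in argv))
# -> /sbin/auditctl -R /etc/audit/audit.rules
```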
Dec 12 18:18:37.097825 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 12 18:18:37.097851 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:18:37.111835 systemd-networkd[1510]: lo: Link UP Dec 12 18:18:37.111845 systemd-networkd[1510]: lo: Gained carrier Dec 12 18:18:37.122856 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 18:18:37.124551 systemd-networkd[1510]: eth1: Configuring with /run/systemd/network/10-d2:f8:f1:2a:5a:2c.network. Dec 12 18:18:37.126004 systemd[1]: Reached target network.target - Network. Dec 12 18:18:37.130230 systemd-networkd[1510]: eth1: Link UP Dec 12 18:18:37.135941 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 12 18:18:37.140207 systemd-networkd[1510]: eth1: Gained carrier Dec 12 18:18:37.140854 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 12 18:18:37.144961 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 18:18:37.145425 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 18:18:37.149734 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 18:18:37.156314 systemd-timesyncd[1479]: Network configuration changed, trying to establish connection. Dec 12 18:18:37.195169 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 12 18:18:37.199994 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 18:18:37.201851 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 18:18:37.213013 kernel: ISO 9660 Extensions: RRIP_1991A Dec 12 18:18:37.209494 systemd-networkd[1510]: eth0: Configuring with /run/systemd/network/10-5e:cd:e6:af:a4:87.network. Dec 12 18:18:37.214210 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Dec 12 18:18:37.219507 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 12 18:18:37.224173 systemd-networkd[1510]: eth0: Link UP Dec 12 18:18:37.224192 systemd-timesyncd[1479]: Network configuration changed, trying to establish connection. Dec 12 18:18:37.227373 systemd-timesyncd[1479]: Network configuration changed, trying to establish connection. Dec 12 18:18:37.228074 systemd-networkd[1510]: eth0: Gained carrier Dec 12 18:18:37.233594 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 18:18:37.237857 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 18:18:37.239781 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 18:18:37.241364 systemd-timesyncd[1479]: Network configuration changed, trying to establish connection. Dec 12 18:18:37.246879 systemd-timesyncd[1479]: Network configuration changed, trying to establish connection. Dec 12 18:18:37.247275 kernel: mousedev: PS/2 mouse device common for all mice Dec 12 18:18:37.257633 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
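The networkd messages above ("Link UP", "Gained carrier" for lo, eth1, eth0) map directly onto kernel state exposed under /sys/class/net. A minimal sketch that prints the same view for every interface; `networkctl status <iface>` would additionally show which /run/systemd/network/*.network file matched, as in the log:

```python
#!/usr/bin/env python3
"""Sketch: read operstate/carrier for each interface, mirroring the
'Link UP' / 'Gained carrier' events systemd-networkd logged above."""
import pathlib

for iface in sorted(pathlib.Path("/sys/class/net").iterdir()):
    operstate = (iface / "operstate").read_text().strip()
    try:
        carrier = (iface / "carrier").read_text().strip()
    except OSError:
        carrier = "n/a"   # reading carrier fails while the link is down
    print(f"{iface.name}: operstate={operstate} carrier={carrier}")
```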
Dec 12 18:18:37.288499 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 12 18:18:37.335604 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Dec 12 18:18:37.351225 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 12 18:18:37.364287 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 12 18:18:37.404271 kernel: ACPI: button: Power Button [PWRF] Dec 12 18:18:37.515124 ldconfig[1461]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 12 18:18:37.526898 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 12 18:18:37.532547 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 12 18:18:37.573876 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 12 18:18:37.574921 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 18:18:37.575827 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 12 18:18:37.577852 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 12 18:18:37.578901 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 12 18:18:37.580967 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 12 18:18:37.582738 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 12 18:18:37.584500 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 12 18:18:37.586489 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 12 18:18:37.587293 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 12 18:18:37.589339 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 12 18:18:37.589390 systemd[1]: Reached target paths.target - Path Units. Dec 12 18:18:37.590807 systemd[1]: Reached target timers.target - Timer Units. Dec 12 18:18:37.593395 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 12 18:18:37.597028 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 12 18:18:37.606738 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 12 18:18:37.618644 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Dec 12 18:18:37.618732 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Dec 12 18:18:37.619236 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 12 18:18:37.626311 kernel: Console: switching to colour dummy device 80x25 Dec 12 18:18:37.630229 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Dec 12 18:18:37.630326 kernel: [drm] features: -context_init Dec 12 18:18:37.627848 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 12 18:18:37.634299 kernel: [drm] number of scanouts: 1 Dec 12 18:18:37.637490 kernel: [drm] number of cap sets: 0 Dec 12 18:18:37.637572 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 Dec 12 18:18:37.637945 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
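The units started above (logrotate.timer, mdadm.timer, systemd-sysupdate.timer, systemd-tmpfiles-clean.timer, motdgen.path, and friends) are ordinary timer and path units. A sketch, assuming nothing beyond systemctl itself, that lists the same timers together with their next elapse times once the system is up:

```python
#!/usr/bin/env python3
"""Sketch: show all timer units and their schedules via systemctl."""
import subprocess

out = subprocess.run(["systemctl", "list-timers", "--all", "--no-pager"],
                     capture_output=True, text=True)
print(out.stdout)
```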
Dec 12 18:18:37.638508 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 12 18:18:37.640174 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 12 18:18:37.648105 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 18:18:37.648524 systemd[1]: Reached target basic.target - Basic System. Dec 12 18:18:37.648665 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 12 18:18:37.648697 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 12 18:18:37.651478 systemd[1]: Starting containerd.service - containerd container runtime... Dec 12 18:18:37.656513 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 12 18:18:37.660524 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Dec 12 18:18:37.660611 kernel: Console: switching to colour frame buffer device 128x48 Dec 12 18:18:37.661488 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 12 18:18:37.670272 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Dec 12 18:18:37.691611 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 12 18:18:37.699495 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 12 18:18:37.703552 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 12 18:18:37.705131 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 12 18:18:37.710812 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 12 18:18:37.718515 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 12 18:18:37.726517 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 12 18:18:37.728736 jq[1577]: false Dec 12 18:18:37.733700 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 12 18:18:37.743695 extend-filesystems[1578]: Found /dev/vda6 Dec 12 18:18:37.746358 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 12 18:18:37.752964 extend-filesystems[1578]: Found /dev/vda9 Dec 12 18:18:37.758596 extend-filesystems[1578]: Checking size of /dev/vda9 Dec 12 18:18:37.759958 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 12 18:18:37.760996 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 12 18:18:37.771284 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 12 18:18:37.779017 systemd[1]: Starting update-engine.service - Update Engine... Dec 12 18:18:37.785896 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 12 18:18:37.819113 update_engine[1593]: I20251212 18:18:37.817670 1593 main.cc:92] Flatcar Update Engine starting Dec 12 18:18:37.803339 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 12 18:18:37.803960 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 12 18:18:37.819919 jq[1594]: true Dec 12 18:18:37.804627 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Dec 12 18:18:37.848602 jq[1597]: true Dec 12 18:18:37.870377 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:18:37.876177 oslogin_cache_refresh[1579]: Refreshing passwd entry cache Dec 12 18:18:37.876774 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Refreshing passwd entry cache Dec 12 18:18:37.896732 oslogin_cache_refresh[1579]: Failure getting users, quitting Dec 12 18:18:37.900619 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Failure getting users, quitting Dec 12 18:18:37.900619 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 12 18:18:37.900619 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Refreshing group entry cache Dec 12 18:18:37.900619 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Failure getting groups, quitting Dec 12 18:18:37.900619 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 12 18:18:37.896754 oslogin_cache_refresh[1579]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 12 18:18:37.896812 oslogin_cache_refresh[1579]: Refreshing group entry cache Dec 12 18:18:37.899456 oslogin_cache_refresh[1579]: Failure getting groups, quitting Dec 12 18:18:37.899472 oslogin_cache_refresh[1579]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 12 18:18:37.904918 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 12 18:18:37.905443 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 12 18:18:37.911653 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 12 18:18:37.911962 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 12 18:18:37.943965 extend-filesystems[1578]: Resized partition /dev/vda9 Dec 12 18:18:37.949733 systemd[1]: motdgen.service: Deactivated successfully. Dec 12 18:18:37.951011 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 12 18:18:37.962512 extend-filesystems[1632]: resize2fs 1.47.3 (8-Jul-2025) Dec 12 18:18:37.983923 coreos-metadata[1574]: Dec 12 18:18:37.981 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 12 18:18:37.991779 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 14138363 blocks Dec 12 18:18:38.002824 coreos-metadata[1574]: Dec 12 18:18:37.998 INFO Fetch successful Dec 12 18:18:38.002956 tar[1612]: linux-amd64/LICENSE Dec 12 18:18:38.002956 tar[1612]: linux-amd64/helm Dec 12 18:18:38.012778 dbus-daemon[1575]: [system] SELinux support is enabled Dec 12 18:18:38.013103 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 12 18:18:38.022214 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 12 18:18:38.022271 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 12 18:18:38.024557 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
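coreos-metadata above fetches the DigitalOcean metadata service at the link-local address 169.254.169.254. A sketch of pulling the same document by hand; it only works from inside a droplet, and the keys named in the comment are examples from DO's v1 schema rather than anything guaranteed by this log:

```python
#!/usr/bin/env python3
"""Sketch: fetch the same DigitalOcean metadata document coreos-metadata
retrieved above (reachable only from within the droplet)."""
import json
import urllib.request

URL = "http://169.254.169.254/metadata/v1.json"

with urllib.request.urlopen(URL, timeout=5) as resp:
    metadata = json.load(resp)

# Keys such as hostname and droplet_id are part of DO's metadata v1 schema.
print(metadata.get("hostname"), metadata.get("droplet_id"))
```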
Dec 12 18:18:38.025496 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Dec 12 18:18:38.025532 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 12 18:18:38.077183 systemd[1]: Started update-engine.service - Update Engine. Dec 12 18:18:38.083680 update_engine[1593]: I20251212 18:18:38.078500 1593 update_check_scheduler.cc:74] Next update check in 7m24s Dec 12 18:18:38.103824 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 12 18:18:38.171955 bash[1646]: Updated "/home/core/.ssh/authorized_keys" Dec 12 18:18:38.168979 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 12 18:18:38.180715 systemd[1]: Starting sshkeys.service... Dec 12 18:18:38.225463 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 12 18:18:38.230462 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 12 18:18:38.288962 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 18:18:38.290425 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:18:38.318939 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:18:38.336267 kernel: EXT4-fs (vda9): resized filesystem to 14138363 Dec 12 18:18:38.355179 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 12 18:18:38.357420 systemd-logind[1590]: New seat seat0. Dec 12 18:18:38.362336 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 12 18:18:38.378187 systemd-logind[1590]: Watching system buttons on /dev/input/event2 (Power Button) Dec 12 18:18:38.378216 systemd-logind[1590]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 12 18:18:38.378535 extend-filesystems[1632]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 12 18:18:38.378535 extend-filesystems[1632]: old_desc_blocks = 1, new_desc_blocks = 7 Dec 12 18:18:38.378535 extend-filesystems[1632]: The filesystem on /dev/vda9 is now 14138363 (4k) blocks long. Dec 12 18:18:38.398855 extend-filesystems[1578]: Resized filesystem in /dev/vda9 Dec 12 18:18:38.379697 systemd[1]: Started systemd-logind.service - User Login Management. Dec 12 18:18:38.386164 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 12 18:18:38.386565 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 12 18:18:38.526223 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 18:18:38.526733 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:18:38.537829 coreos-metadata[1657]: Dec 12 18:18:38.537 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 12 18:18:38.537829 coreos-metadata[1657]: Dec 12 18:18:38.537 INFO Fetch successful Dec 12 18:18:38.540212 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:18:38.579005 unknown[1657]: wrote ssh authorized keys file for user: core Dec 12 18:18:38.608139 systemd-networkd[1510]: eth1: Gained IPv6LL Dec 12 18:18:38.616422 systemd-timesyncd[1479]: Network configuration changed, trying to establish connection. 
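The resize2fs/extend-filesystems lines above report the root filesystem growing from 456704 to 14138363 blocks of 4 KiB each. A short worked-arithmetic sketch (numbers taken straight from the log) converting those block counts into sizes:

```python
#!/usr/bin/env python3
"""Sketch: convert the block counts logged by resize2fs into GiB."""
BLOCK_SIZE = 4096          # "(4k) blocks" per the log
OLD_BLOCKS = 456_704       # "resizing filesystem from 456704 ..."
NEW_BLOCKS = 14_138_363    # "... to 14138363 blocks"

def gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30

print(f"before: {gib(OLD_BLOCKS):.2f} GiB, after: {gib(NEW_BLOCKS):.2f} GiB")
# before: 1.74 GiB, after: 53.93 GiB
```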
Dec 12 18:18:38.623640 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 12 18:18:38.630618 systemd[1]: Reached target network-online.target - Network is Online. Dec 12 18:18:38.641444 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:18:38.647456 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 12 18:18:38.694023 update-ssh-keys[1675]: Updated "/home/core/.ssh/authorized_keys" Dec 12 18:18:38.693532 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 12 18:18:38.703529 systemd[1]: Finished sshkeys.service. Dec 12 18:18:38.749894 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 12 18:18:38.768365 containerd[1616]: time="2025-12-12T18:18:38Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 12 18:18:38.768365 containerd[1616]: time="2025-12-12T18:18:38.767079646Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 12 18:18:38.871827 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 12 18:18:38.905915 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:18:38.911926 containerd[1616]: time="2025-12-12T18:18:38.910666468Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.905µs" Dec 12 18:18:38.911926 containerd[1616]: time="2025-12-12T18:18:38.910710648Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 12 18:18:38.911926 containerd[1616]: time="2025-12-12T18:18:38.910775351Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 12 18:18:38.911926 containerd[1616]: time="2025-12-12T18:18:38.910813123Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 12 18:18:38.922145 kernel: EDAC MC: Ver: 3.0.0 Dec 12 18:18:38.922222 containerd[1616]: time="2025-12-12T18:18:38.917525582Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 12 18:18:38.922222 containerd[1616]: time="2025-12-12T18:18:38.917576942Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 18:18:38.922222 containerd[1616]: time="2025-12-12T18:18:38.917678534Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 18:18:38.922222 containerd[1616]: time="2025-12-12T18:18:38.917697495Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 18:18:38.922222 containerd[1616]: time="2025-12-12T18:18:38.918061990Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 18:18:38.922222 containerd[1616]: time="2025-12-12T18:18:38.918091088Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 18:18:38.922222 containerd[1616]: 
time="2025-12-12T18:18:38.918112771Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 18:18:38.922222 containerd[1616]: time="2025-12-12T18:18:38.918127116Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 12 18:18:38.934344 containerd[1616]: time="2025-12-12T18:18:38.927548496Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 12 18:18:38.934344 containerd[1616]: time="2025-12-12T18:18:38.927585326Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 12 18:18:38.934344 containerd[1616]: time="2025-12-12T18:18:38.927736938Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 12 18:18:38.956641 containerd[1616]: time="2025-12-12T18:18:38.956574500Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 18:18:38.957399 containerd[1616]: time="2025-12-12T18:18:38.957365223Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 18:18:38.960357 containerd[1616]: time="2025-12-12T18:18:38.960293933Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 12 18:18:38.963362 containerd[1616]: time="2025-12-12T18:18:38.960575826Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 12 18:18:38.963362 containerd[1616]: time="2025-12-12T18:18:38.960957061Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 12 18:18:38.963362 containerd[1616]: time="2025-12-12T18:18:38.961133696Z" level=info msg="metadata content store policy set" policy=shared Dec 12 18:18:38.979566 containerd[1616]: time="2025-12-12T18:18:38.978678742Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 12 18:18:38.979566 containerd[1616]: time="2025-12-12T18:18:38.978872559Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 12 18:18:38.979566 containerd[1616]: time="2025-12-12T18:18:38.978981923Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 12 18:18:38.979566 containerd[1616]: time="2025-12-12T18:18:38.978996845Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 12 18:18:38.979566 containerd[1616]: time="2025-12-12T18:18:38.979012929Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 12 18:18:38.979566 containerd[1616]: time="2025-12-12T18:18:38.979029048Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 12 18:18:38.979566 containerd[1616]: time="2025-12-12T18:18:38.979045894Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 12 18:18:38.979566 containerd[1616]: 
time="2025-12-12T18:18:38.979062786Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 12 18:18:38.979566 containerd[1616]: time="2025-12-12T18:18:38.979079543Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 12 18:18:38.979566 containerd[1616]: time="2025-12-12T18:18:38.979100729Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 12 18:18:38.979566 containerd[1616]: time="2025-12-12T18:18:38.979152916Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 12 18:18:38.979566 containerd[1616]: time="2025-12-12T18:18:38.979178966Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 12 18:18:38.979566 containerd[1616]: time="2025-12-12T18:18:38.979193497Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 12 18:18:38.979566 containerd[1616]: time="2025-12-12T18:18:38.979216084Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 12 18:18:38.989298 locksmithd[1647]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 12 18:18:38.995880 containerd[1616]: time="2025-12-12T18:18:38.990848251Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 12 18:18:38.995880 containerd[1616]: time="2025-12-12T18:18:38.993048088Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 12 18:18:38.995880 containerd[1616]: time="2025-12-12T18:18:38.993091238Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 12 18:18:38.995880 containerd[1616]: time="2025-12-12T18:18:38.993813809Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 12 18:18:38.993338 systemd-networkd[1510]: eth0: Gained IPv6LL Dec 12 18:18:38.993855 systemd-timesyncd[1479]: Network configuration changed, trying to establish connection. 
Dec 12 18:18:39.001549 containerd[1616]: time="2025-12-12T18:18:38.997725173Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 12 18:18:39.006282 containerd[1616]: time="2025-12-12T18:18:39.003232609Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 12 18:18:39.009653 containerd[1616]: time="2025-12-12T18:18:39.008849590Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 12 18:18:39.009653 containerd[1616]: time="2025-12-12T18:18:39.009116660Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 12 18:18:39.009653 containerd[1616]: time="2025-12-12T18:18:39.009150270Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 12 18:18:39.009653 containerd[1616]: time="2025-12-12T18:18:39.009172692Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 12 18:18:39.009653 containerd[1616]: time="2025-12-12T18:18:39.009284639Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 18:18:39.009653 containerd[1616]: time="2025-12-12T18:18:39.009423370Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 12 18:18:39.009653 containerd[1616]: time="2025-12-12T18:18:39.009574714Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 12 18:18:39.009653 containerd[1616]: time="2025-12-12T18:18:39.009600588Z" level=info msg="Start snapshots syncer" Dec 12 18:18:39.013659 containerd[1616]: time="2025-12-12T18:18:39.012798947Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 12 18:18:39.021525 containerd[1616]: time="2025-12-12T18:18:39.019546260Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 12 18:18:39.022366 containerd[1616]: time="2025-12-12T18:18:39.021497322Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 12 18:18:39.023920 containerd[1616]: time="2025-12-12T18:18:39.023560988Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 12 18:18:39.029309 containerd[1616]: time="2025-12-12T18:18:39.024573511Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 18:18:39.029309 containerd[1616]: time="2025-12-12T18:18:39.028508199Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 18:18:39.029309 containerd[1616]: time="2025-12-12T18:18:39.028538773Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 18:18:39.029309 containerd[1616]: time="2025-12-12T18:18:39.028569281Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 18:18:39.029309 containerd[1616]: time="2025-12-12T18:18:39.028591936Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 18:18:39.029309 containerd[1616]: time="2025-12-12T18:18:39.028608784Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 18:18:39.029309 containerd[1616]: time="2025-12-12T18:18:39.028626936Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 18:18:39.029309 containerd[1616]: time="2025-12-12T18:18:39.028666794Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 
18:18:39.029309 containerd[1616]: time="2025-12-12T18:18:39.028687592Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 18:18:39.029309 containerd[1616]: time="2025-12-12T18:18:39.028777511Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 18:18:39.029309 containerd[1616]: time="2025-12-12T18:18:39.028909570Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 18:18:39.029309 containerd[1616]: time="2025-12-12T18:18:39.028933073Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 18:18:39.029309 containerd[1616]: time="2025-12-12T18:18:39.028947580Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 18:18:39.029309 containerd[1616]: time="2025-12-12T18:18:39.028972853Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 18:18:39.030094 containerd[1616]: time="2025-12-12T18:18:39.029011739Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 18:18:39.030094 containerd[1616]: time="2025-12-12T18:18:39.029052379Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 18:18:39.030094 containerd[1616]: time="2025-12-12T18:18:39.029079048Z" level=info msg="runtime interface created" Dec 12 18:18:39.030094 containerd[1616]: time="2025-12-12T18:18:39.029088687Z" level=info msg="created NRI interface" Dec 12 18:18:39.030094 containerd[1616]: time="2025-12-12T18:18:39.029100483Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 18:18:39.030094 containerd[1616]: time="2025-12-12T18:18:39.029134912Z" level=info msg="Connect containerd service" Dec 12 18:18:39.030094 containerd[1616]: time="2025-12-12T18:18:39.029175415Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 18:18:39.043398 containerd[1616]: time="2025-12-12T18:18:39.041220096Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 18:18:39.280905 sshd_keygen[1627]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 18:18:39.365064 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 18:18:39.375067 containerd[1616]: time="2025-12-12T18:18:39.375021420Z" level=info msg="Start subscribing containerd event" Dec 12 18:18:39.375439 containerd[1616]: time="2025-12-12T18:18:39.375393621Z" level=info msg="Start recovering state" Dec 12 18:18:39.376043 systemd[1]: Starting issuegen.service - Generate /run/issue... 
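The "starting cri plugin" entry above dumps containerd's effective CRI configuration as an escaped JSON string (SystemdCgroup is true, the CNI conf dir is /etc/cni/net.d with bin dir /opt/cni/bin), and the subsequent error records that no CNI network config exists there yet. A rough sketch, assuming the config="..." blob has been captured on a single line, for decoding it and printing those fields:

import json
import re

def cri_config_from_log_line(line: str) -> dict:
    # Grab the escaped JSON between config="{ ... }" (greedy match; rough sketch only).
    m = re.search(r'config="(\{.*\})"', line)
    if not m:
        raise ValueError('no config="..." blob in line')
    # The console prints the JSON with backslash-escaped quotes; undo that before parsing.
    return json.loads(m.group(1).replace('\\"', '"'))

def summarize(cfg: dict) -> None:
    runc_opts = cfg["containerd"]["runtimes"]["runc"]["options"]
    print("SystemdCgroup:", runc_opts["SystemdCgroup"])  # true in the dump above
    print("CNI conf dir:", cfg["cni"]["confDir"])        # /etc/cni/net.d, still empty per the error above
    print("CNI bin dirs:", cfg["cni"]["binDirs"])        # ["/opt/cni/bin"]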
Dec 12 18:18:39.380133 containerd[1616]: time="2025-12-12T18:18:39.378456000Z" level=info msg="Start event monitor" Dec 12 18:18:39.381137 containerd[1616]: time="2025-12-12T18:18:39.380449649Z" level=info msg="Start cni network conf syncer for default" Dec 12 18:18:39.381137 containerd[1616]: time="2025-12-12T18:18:39.380604291Z" level=info msg="Start streaming server" Dec 12 18:18:39.381137 containerd[1616]: time="2025-12-12T18:18:39.380624910Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 18:18:39.381137 containerd[1616]: time="2025-12-12T18:18:39.381063075Z" level=info msg="runtime interface starting up..." Dec 12 18:18:39.381137 containerd[1616]: time="2025-12-12T18:18:39.381093888Z" level=info msg="starting plugins..." Dec 12 18:18:39.382840 systemd[1]: Started sshd@0-64.23.253.31:22-147.75.109.163:54520.service - OpenSSH per-connection server daemon (147.75.109.163:54520). Dec 12 18:18:39.388973 containerd[1616]: time="2025-12-12T18:18:39.387738003Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 12 18:18:39.388973 containerd[1616]: time="2025-12-12T18:18:39.387823837Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 12 18:18:39.389603 containerd[1616]: time="2025-12-12T18:18:39.389571017Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 12 18:18:39.392657 systemd[1]: Started containerd.service - containerd container runtime. Dec 12 18:18:39.396544 containerd[1616]: time="2025-12-12T18:18:39.392712488Z" level=info msg="containerd successfully booted in 0.627698s" Dec 12 18:18:39.446837 systemd[1]: issuegen.service: Deactivated successfully. Dec 12 18:18:39.447571 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 12 18:18:39.455700 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 12 18:18:39.515873 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 12 18:18:39.523708 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 12 18:18:39.531291 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 12 18:18:39.534895 systemd[1]: Reached target getty.target - Login Prompts. Dec 12 18:18:39.589958 sshd[1726]: Accepted publickey for core from 147.75.109.163 port 54520 ssh2: RSA SHA256:AH9sdAXiVqzWQffvo8h2BLk1KGxpD1YCntXC072RVDo Dec 12 18:18:39.592988 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:18:39.615813 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 12 18:18:39.619265 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 12 18:18:39.655383 systemd-logind[1590]: New session 1 of user core. Dec 12 18:18:39.675658 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 12 18:18:39.682773 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 12 18:18:39.703927 (systemd)[1739]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 12 18:18:39.716675 systemd-logind[1590]: New session c1 of user core. Dec 12 18:18:39.966189 tar[1612]: linux-amd64/README.md Dec 12 18:18:39.983019 systemd[1739]: Queued start job for default target default.target. Dec 12 18:18:39.992003 systemd[1739]: Created slice app.slice - User Application Slice. Dec 12 18:18:39.992404 systemd[1739]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. 
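The "Accepted publickey" entries above and below identify the client key by its SHA256:AH9s... fingerprint. OpenSSH derives that token as the unpadded base64 of the SHA-256 digest of the raw key blob; a short sketch computing it from an authorized_keys line:

import base64
import hashlib

def openssh_fingerprint(authorized_keys_line: str) -> str:
    # "ssh-rsa AAAAB3... comment" -> the second field is the base64-encoded key blob.
    blob_b64 = authorized_keys_line.split()[1]
    digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
    # OpenSSH prints the unpadded base64 of the digest, prefixed with "SHA256:".
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")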
Dec 12 18:18:39.992429 systemd[1739]: Reached target paths.target - Paths. Dec 12 18:18:39.992634 systemd[1739]: Reached target timers.target - Timers. Dec 12 18:18:39.997420 systemd[1739]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 12 18:18:39.999534 systemd[1739]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 12 18:18:40.007339 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 12 18:18:40.034371 systemd[1739]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 12 18:18:40.043046 systemd[1739]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 12 18:18:40.044410 systemd[1739]: Reached target sockets.target - Sockets. Dec 12 18:18:40.044852 systemd[1739]: Reached target basic.target - Basic System. Dec 12 18:18:40.045181 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 12 18:18:40.048478 systemd[1739]: Reached target default.target - Main User Target. Dec 12 18:18:40.048546 systemd[1739]: Startup finished in 315ms. Dec 12 18:18:40.059671 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 12 18:18:40.100771 systemd[1]: Started sshd@1-64.23.253.31:22-147.75.109.163:54526.service - OpenSSH per-connection server daemon (147.75.109.163:54526). Dec 12 18:18:40.211193 sshd[1755]: Accepted publickey for core from 147.75.109.163 port 54526 ssh2: RSA SHA256:AH9sdAXiVqzWQffvo8h2BLk1KGxpD1YCntXC072RVDo Dec 12 18:18:40.213941 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:18:40.227083 systemd-logind[1590]: New session 2 of user core. Dec 12 18:18:40.233626 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 12 18:18:40.269316 sshd[1758]: Connection closed by 147.75.109.163 port 54526 Dec 12 18:18:40.270290 sshd-session[1755]: pam_unix(sshd:session): session closed for user core Dec 12 18:18:40.286724 systemd[1]: sshd@1-64.23.253.31:22-147.75.109.163:54526.service: Deactivated successfully. Dec 12 18:18:40.290890 systemd[1]: session-2.scope: Deactivated successfully. Dec 12 18:18:40.294819 systemd-logind[1590]: Session 2 logged out. Waiting for processes to exit. Dec 12 18:18:40.298321 systemd-logind[1590]: Removed session 2. Dec 12 18:18:40.303791 systemd[1]: Started sshd@2-64.23.253.31:22-147.75.109.163:54536.service - OpenSSH per-connection server daemon (147.75.109.163:54536). Dec 12 18:18:40.385584 sshd[1764]: Accepted publickey for core from 147.75.109.163 port 54536 ssh2: RSA SHA256:AH9sdAXiVqzWQffvo8h2BLk1KGxpD1YCntXC072RVDo Dec 12 18:18:40.387281 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:18:40.399873 systemd-logind[1590]: New session 3 of user core. Dec 12 18:18:40.405570 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 12 18:18:40.436912 sshd[1767]: Connection closed by 147.75.109.163 port 54536 Dec 12 18:18:40.438552 sshd-session[1764]: pam_unix(sshd:session): session closed for user core Dec 12 18:18:40.445052 systemd[1]: sshd@2-64.23.253.31:22-147.75.109.163:54536.service: Deactivated successfully. Dec 12 18:18:40.446080 systemd-logind[1590]: Session 3 logged out. Waiting for processes to exit. Dec 12 18:18:40.449641 systemd[1]: session-3.scope: Deactivated successfully. Dec 12 18:18:40.455533 systemd-logind[1590]: Removed session 3. Dec 12 18:18:40.690526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 12 18:18:40.693429 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 12 18:18:40.695824 systemd[1]: Startup finished in 3.465s (kernel) + 6.436s (initrd) + 7.622s (userspace) = 17.524s. Dec 12 18:18:40.704204 (kubelet)[1777]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:18:41.518369 kubelet[1777]: E1212 18:18:41.518285 1777 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:18:41.521496 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:18:41.521767 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:18:41.522585 systemd[1]: kubelet.service: Consumed 1.398s CPU time, 263.7M memory peak. Dec 12 18:18:50.460738 systemd[1]: Started sshd@3-64.23.253.31:22-147.75.109.163:53692.service - OpenSSH per-connection server daemon (147.75.109.163:53692). Dec 12 18:18:50.551133 sshd[1789]: Accepted publickey for core from 147.75.109.163 port 53692 ssh2: RSA SHA256:AH9sdAXiVqzWQffvo8h2BLk1KGxpD1YCntXC072RVDo Dec 12 18:18:50.553429 sshd-session[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:18:50.564379 systemd-logind[1590]: New session 4 of user core. Dec 12 18:18:50.569633 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 12 18:18:50.595437 sshd[1792]: Connection closed by 147.75.109.163 port 53692 Dec 12 18:18:50.596273 sshd-session[1789]: pam_unix(sshd:session): session closed for user core Dec 12 18:18:50.610759 systemd[1]: sshd@3-64.23.253.31:22-147.75.109.163:53692.service: Deactivated successfully. Dec 12 18:18:50.613931 systemd[1]: session-4.scope: Deactivated successfully. Dec 12 18:18:50.616085 systemd-logind[1590]: Session 4 logged out. Waiting for processes to exit. Dec 12 18:18:50.620178 systemd[1]: Started sshd@4-64.23.253.31:22-147.75.109.163:53696.service - OpenSSH per-connection server daemon (147.75.109.163:53696). Dec 12 18:18:50.622071 systemd-logind[1590]: Removed session 4. Dec 12 18:18:50.690502 sshd[1798]: Accepted publickey for core from 147.75.109.163 port 53696 ssh2: RSA SHA256:AH9sdAXiVqzWQffvo8h2BLk1KGxpD1YCntXC072RVDo Dec 12 18:18:50.692611 sshd-session[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:18:50.701521 systemd-logind[1590]: New session 5 of user core. Dec 12 18:18:50.720779 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 12 18:18:50.739363 sshd[1801]: Connection closed by 147.75.109.163 port 53696 Dec 12 18:18:50.740402 sshd-session[1798]: pam_unix(sshd:session): session closed for user core Dec 12 18:18:50.754397 systemd[1]: sshd@4-64.23.253.31:22-147.75.109.163:53696.service: Deactivated successfully. Dec 12 18:18:50.757327 systemd[1]: session-5.scope: Deactivated successfully. Dec 12 18:18:50.758948 systemd-logind[1590]: Session 5 logged out. Waiting for processes to exit. Dec 12 18:18:50.764840 systemd[1]: Started sshd@5-64.23.253.31:22-147.75.109.163:53708.service - OpenSSH per-connection server daemon (147.75.109.163:53708). Dec 12 18:18:50.766849 systemd-logind[1590]: Removed session 5. 
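kubelet.service above exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written later (for example by kubeadm during init or join), so systemd keeps rescheduling the unit in the meantime. A small illustrative helper, with arbitrary timeout values, that a provisioning script could use to wait for the file instead of racing the restart loop:

import pathlib
import time

# Path taken straight from the kubelet error above; timeout/poll values are arbitrary.
KUBELET_CONFIG = pathlib.Path("/var/lib/kubelet/config.yaml")

def wait_for_kubelet_config(timeout_s: float = 300.0, poll_s: float = 5.0) -> bool:
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if KUBELET_CONFIG.is_file():
            return True
        time.sleep(poll_s)
    return False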
Dec 12 18:18:50.838850 sshd[1807]: Accepted publickey for core from 147.75.109.163 port 53708 ssh2: RSA SHA256:AH9sdAXiVqzWQffvo8h2BLk1KGxpD1YCntXC072RVDo Dec 12 18:18:50.840942 sshd-session[1807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:18:50.849887 systemd-logind[1590]: New session 6 of user core. Dec 12 18:18:50.868595 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 12 18:18:50.893275 sshd[1810]: Connection closed by 147.75.109.163 port 53708 Dec 12 18:18:50.892986 sshd-session[1807]: pam_unix(sshd:session): session closed for user core Dec 12 18:18:50.907960 systemd[1]: sshd@5-64.23.253.31:22-147.75.109.163:53708.service: Deactivated successfully. Dec 12 18:18:50.910646 systemd[1]: session-6.scope: Deactivated successfully. Dec 12 18:18:50.912220 systemd-logind[1590]: Session 6 logged out. Waiting for processes to exit. Dec 12 18:18:50.917297 systemd[1]: Started sshd@6-64.23.253.31:22-147.75.109.163:53712.service - OpenSSH per-connection server daemon (147.75.109.163:53712). Dec 12 18:18:50.918395 systemd-logind[1590]: Removed session 6. Dec 12 18:18:51.012200 sshd[1816]: Accepted publickey for core from 147.75.109.163 port 53712 ssh2: RSA SHA256:AH9sdAXiVqzWQffvo8h2BLk1KGxpD1YCntXC072RVDo Dec 12 18:18:51.015333 sshd-session[1816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:18:51.026328 systemd-logind[1590]: New session 7 of user core. Dec 12 18:18:51.031673 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 12 18:18:51.074986 sudo[1820]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 12 18:18:51.075924 sudo[1820]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:18:51.094527 sudo[1820]: pam_unix(sudo:session): session closed for user root Dec 12 18:18:51.099265 sshd[1819]: Connection closed by 147.75.109.163 port 53712 Dec 12 18:18:51.100338 sshd-session[1816]: pam_unix(sshd:session): session closed for user core Dec 12 18:18:51.124897 systemd[1]: sshd@6-64.23.253.31:22-147.75.109.163:53712.service: Deactivated successfully. Dec 12 18:18:51.128146 systemd[1]: session-7.scope: Deactivated successfully. Dec 12 18:18:51.129886 systemd-logind[1590]: Session 7 logged out. Waiting for processes to exit. Dec 12 18:18:51.135039 systemd[1]: Started sshd@7-64.23.253.31:22-147.75.109.163:53722.service - OpenSSH per-connection server daemon (147.75.109.163:53722). Dec 12 18:18:51.136635 systemd-logind[1590]: Removed session 7. Dec 12 18:18:51.222125 sshd[1826]: Accepted publickey for core from 147.75.109.163 port 53722 ssh2: RSA SHA256:AH9sdAXiVqzWQffvo8h2BLk1KGxpD1YCntXC072RVDo Dec 12 18:18:51.224151 sshd-session[1826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:18:51.233022 systemd-logind[1590]: New session 8 of user core. Dec 12 18:18:51.245662 systemd[1]: Started session-8.scope - Session 8 of User core. 
Dec 12 18:18:51.271139 sudo[1831]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 12 18:18:51.271589 sudo[1831]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:18:51.279802 sudo[1831]: pam_unix(sudo:session): session closed for user root Dec 12 18:18:51.289912 sudo[1830]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 12 18:18:51.290946 sudo[1830]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:18:51.307097 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 18:18:51.367000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 12 18:18:51.368985 kernel: kauditd_printk_skb: 128 callbacks suppressed Dec 12 18:18:51.369093 kernel: audit: type=1305 audit(1765563531.367:227): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 12 18:18:51.371677 augenrules[1853]: No rules Dec 12 18:18:51.367000 audit[1853]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffda9ef03f0 a2=420 a3=0 items=0 ppid=1834 pid=1853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:51.377749 kernel: audit: type=1300 audit(1765563531.367:227): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffda9ef03f0 a2=420 a3=0 items=0 ppid=1834 pid=1853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:51.381330 kernel: audit: type=1327 audit(1765563531.367:227): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 18:18:51.367000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 18:18:51.378308 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 18:18:51.378706 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 18:18:51.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:51.382956 sudo[1830]: pam_unix(sudo:session): session closed for user root Dec 12 18:18:51.386333 kernel: audit: type=1130 audit(1765563531.377:228): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:51.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:51.382000 audit[1830]: USER_END pid=1830 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 12 18:18:51.391548 sshd[1829]: Connection closed by 147.75.109.163 port 53722 Dec 12 18:18:51.395602 kernel: audit: type=1131 audit(1765563531.377:229): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:51.395751 kernel: audit: type=1106 audit(1765563531.382:230): pid=1830 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:18:51.395902 sshd-session[1826]: pam_unix(sshd:session): session closed for user core Dec 12 18:18:51.382000 audit[1830]: CRED_DISP pid=1830 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:18:51.400150 kernel: audit: type=1104 audit(1765563531.382:231): pid=1830 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:18:51.401000 audit[1826]: USER_END pid=1826 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:18:51.409320 kernel: audit: type=1106 audit(1765563531.401:232): pid=1826 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:18:51.401000 audit[1826]: CRED_DISP pid=1826 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:18:51.415317 kernel: audit: type=1104 audit(1765563531.401:233): pid=1826 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:18:51.416236 systemd[1]: sshd@7-64.23.253.31:22-147.75.109.163:53722.service: Deactivated successfully. Dec 12 18:18:51.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-64.23.253.31:22-147.75.109.163:53722 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:51.419629 systemd[1]: session-8.scope: Deactivated successfully. Dec 12 18:18:51.421867 systemd-logind[1590]: Session 8 logged out. Waiting for processes to exit. Dec 12 18:18:51.422285 kernel: audit: type=1131 audit(1765563531.416:234): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-64.23.253.31:22-147.75.109.163:53722 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:18:51.427195 systemd[1]: Started sshd@8-64.23.253.31:22-147.75.109.163:53732.service - OpenSSH per-connection server daemon (147.75.109.163:53732). Dec 12 18:18:51.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-64.23.253.31:22-147.75.109.163:53732 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:51.428888 systemd-logind[1590]: Removed session 8. Dec 12 18:18:51.498000 audit[1862]: USER_ACCT pid=1862 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:18:51.499060 sshd[1862]: Accepted publickey for core from 147.75.109.163 port 53732 ssh2: RSA SHA256:AH9sdAXiVqzWQffvo8h2BLk1KGxpD1YCntXC072RVDo Dec 12 18:18:51.500000 audit[1862]: CRED_ACQ pid=1862 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:18:51.500000 audit[1862]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc428c9870 a2=3 a3=0 items=0 ppid=1 pid=1862 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:51.500000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:18:51.501055 sshd-session[1862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:18:51.511636 systemd-logind[1590]: New session 9 of user core. Dec 12 18:18:51.514666 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 12 18:18:51.519000 audit[1862]: USER_START pid=1862 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:18:51.522797 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 12 18:18:51.523000 audit[1865]: CRED_ACQ pid=1865 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:18:51.528068 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:18:51.542000 audit[1866]: USER_ACCT pid=1866 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:18:51.543274 sudo[1866]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 12 18:18:51.543000 audit[1866]: CRED_REFR pid=1866 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 12 18:18:51.544185 sudo[1866]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:18:51.551000 audit[1866]: USER_START pid=1866 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:18:51.790863 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:18:51.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:51.808220 (kubelet)[1886]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:18:51.907007 kubelet[1886]: E1212 18:18:51.906932 1886 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:18:51.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 18:18:51.912473 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:18:51.912695 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:18:51.913296 systemd[1]: kubelet.service: Consumed 270ms CPU time, 110.6M memory peak. Dec 12 18:18:52.212719 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 12 18:18:52.233910 (dockerd)[1900]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 12 18:18:52.762776 dockerd[1900]: time="2025-12-12T18:18:52.762710070Z" level=info msg="Starting up" Dec 12 18:18:52.767340 dockerd[1900]: time="2025-12-12T18:18:52.767291250Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 12 18:18:52.789269 dockerd[1900]: time="2025-12-12T18:18:52.789112697Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 12 18:18:52.873410 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4082415764-merged.mount: Deactivated successfully. Dec 12 18:18:52.912296 dockerd[1900]: time="2025-12-12T18:18:52.912091792Z" level=info msg="Loading containers: start." 
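The audit records that follow log each iptables/ip6tables invocation dockerd makes while setting up its chains; the command line itself is hex-encoded in the PROCTITLE field, with NUL bytes separating the arguments. A short decoder, using the first PROCTITLE value below as the example (it decodes to /usr/bin/iptables --wait -t nat -N DOCKER):

def decode_proctitle(hex_value: str) -> str:
    # PROCTITLE is the raw argv of the audited process, hex-encoded with NUL separators.
    parts = bytes.fromhex(hex_value).split(b"\x00")
    return " ".join(p.decode() for p in parts if p)

# Value copied from the first NETFILTER_CFG entry below.
example = "2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552"
print(decode_proctitle(example))  # -> /usr/bin/iptables --wait -t nat -N DOCKER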
Dec 12 18:18:52.927314 kernel: Initializing XFRM netlink socket Dec 12 18:18:53.018000 audit[1949]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1949 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:18:53.018000 audit[1949]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff304474a0 a2=0 a3=0 items=0 ppid=1900 pid=1949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:53.018000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 12 18:18:53.022000 audit[1951]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1951 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:18:53.022000 audit[1951]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffed0d1b3e0 a2=0 a3=0 items=0 ppid=1900 pid=1951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:53.022000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 12 18:18:53.026000 audit[1953]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1953 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:18:53.026000 audit[1953]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb5226550 a2=0 a3=0 items=0 ppid=1900 pid=1953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:53.026000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 12 18:18:53.030000 audit[1955]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1955 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:18:53.030000 audit[1955]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc7ccbaee0 a2=0 a3=0 items=0 ppid=1900 pid=1955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:53.030000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 12 18:18:53.034000 audit[1957]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1957 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:18:53.034000 audit[1957]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc8a8bf8f0 a2=0 a3=0 items=0 ppid=1900 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:53.034000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 12 18:18:53.037000 audit[1959]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1959 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:18:53.037000 audit[1959]: SYSCALL arch=c000003e syscall=46 
success=yes exit=112 a0=3 a1=7ffd9143dbb0 a2=0 a3=0 items=0 ppid=1900 pid=1959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:53.037000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 18:18:53.041000 audit[1961]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1961 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:18:53.041000 audit[1961]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd0d814030 a2=0 a3=0 items=0 ppid=1900 pid=1961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:53.041000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 18:18:53.045000 audit[1963]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1963 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:18:53.045000 audit[1963]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe90a4cf30 a2=0 a3=0 items=0 ppid=1900 pid=1963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:53.045000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 12 18:18:53.087000 audit[1966]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1966 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:18:53.087000 audit[1966]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffc855fe540 a2=0 a3=0 items=0 ppid=1900 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:53.087000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 12 18:18:53.094000 audit[1968]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1968 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:18:53.094000 audit[1968]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe2a14ba20 a2=0 a3=0 items=0 ppid=1900 pid=1968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:53.094000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 12 18:18:53.098000 audit[1970]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1970 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:18:53.098000 audit[1970]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffc700d1f70 a2=0 
a3=0 items=0 ppid=1900 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:53.098000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 12 18:18:53.102000 audit[1972]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1972 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:18:53.102000 audit[1972]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffd74af4c10 a2=0 a3=0 items=0 ppid=1900 pid=1972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:53.102000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 18:18:53.106000 audit[1974]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1974 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:18:53.106000 audit[1974]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffcc6d109d0 a2=0 a3=0 items=0 ppid=1900 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:53.106000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 12 18:18:53.173000 audit[2004]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2004 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:18:53.173000 audit[2004]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc9c3791c0 a2=0 a3=0 items=0 ppid=1900 pid=2004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:53.173000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 12 18:18:53.177000 audit[2006]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2006 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:18:53.177000 audit[2006]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffce7b7aed0 a2=0 a3=0 items=0 ppid=1900 pid=2006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:53.177000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 12 18:18:53.181000 audit[2008]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2008 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:18:53.181000 audit[2008]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe890e3ae0 a2=0 a3=0 items=0 ppid=1900 pid=2008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 12 18:18:53.181000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 12 18:18:53.185000 audit[2010]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2010 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:18:53.185000 audit[2010]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff318635d0 a2=0 a3=0 items=0 ppid=1900 pid=2010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:53.185000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 12 18:18:53.189000 audit[2012]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2012 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:18:53.189000 audit[2012]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcf61a0990 a2=0 a3=0 items=0 ppid=1900 pid=2012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:53.189000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 12 18:18:53.193000 audit[2014]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2014 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:18:53.193000 audit[2014]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe9a795f40 a2=0 a3=0 items=0 ppid=1900 pid=2014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:53.193000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 18:18:53.196000 audit[2016]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2016 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:18:53.196000 audit[2016]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff741177a0 a2=0 a3=0 items=0 ppid=1900 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:53.196000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 18:18:53.201000 audit[2018]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2018 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:18:53.201000 audit[2018]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffc3675ae00 a2=0 a3=0 items=0 ppid=1900 pid=2018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:53.201000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 12 18:18:53.205000 audit[2020]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2020 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:18:53.205000 audit[2020]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffec7892300 a2=0 a3=0 items=0 ppid=1900 pid=2020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:53.205000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 12 18:18:53.209000 audit[2022]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2022 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:18:53.209000 audit[2022]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd3bbee710 a2=0 a3=0 items=0 ppid=1900 pid=2022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:53.209000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 12 18:18:53.213000 audit[2024]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2024 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:18:53.213000 audit[2024]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffd8ee72320 a2=0 a3=0 items=0 ppid=1900 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:53.213000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 12 18:18:53.217000 audit[2026]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2026 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:18:53.217000 audit[2026]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc2ff405a0 a2=0 a3=0 items=0 ppid=1900 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:53.217000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 18:18:53.222000 audit[2028]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2028 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:18:53.222000 audit[2028]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffcc4de8c70 a2=0 a3=0 items=0 ppid=1900 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:53.222000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 12 18:18:53.232000 audit[2033]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2033 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:18:53.232000 audit[2033]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff2e32dec0 a2=0 a3=0 items=0 ppid=1900 pid=2033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:53.232000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 12 18:18:53.237000 audit[2035]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2035 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:18:53.237000 audit[2035]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe9ed8b220 a2=0 a3=0 items=0 ppid=1900 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:53.237000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 12 18:18:53.242000 audit[2037]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2037 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:18:53.242000 audit[2037]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fffbad4dde0 a2=0 a3=0 items=0 ppid=1900 pid=2037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:53.242000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 12 18:18:53.246000 audit[2039]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2039 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:18:53.246000 audit[2039]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffe0058590 a2=0 a3=0 items=0 ppid=1900 pid=2039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:53.246000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 12 18:18:53.250000 audit[2041]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2041 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:18:53.250000 audit[2041]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff34b0b5e0 a2=0 a3=0 items=0 ppid=1900 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:53.250000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 12 18:18:53.254000 audit[2043]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2043 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:18:53.254000 audit[2043]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe71fd14f0 a2=0 a3=0 items=0 ppid=1900 pid=2043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:53.254000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 12 18:18:53.270321 systemd-timesyncd[1479]: Network configuration changed, trying to establish connection. Dec 12 18:18:53.303000 audit[2049]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2049 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:18:53.303000 audit[2049]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7fff90541780 a2=0 a3=0 items=0 ppid=1900 pid=2049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:53.303000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 12 18:18:53.311000 audit[2051]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2051 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:18:53.311000 audit[2051]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffefe5a2d00 a2=0 a3=0 items=0 ppid=1900 pid=2051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:53.311000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 12 18:18:53.329000 audit[2059]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2059 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:18:53.329000 audit[2059]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7fffa7d7dd10 a2=0 a3=0 items=0 ppid=1900 pid=2059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:53.329000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 12 18:18:53.346000 audit[2065]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2065 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:18:53.346000 audit[2065]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc6574a570 a2=0 a3=0 items=0 ppid=1900 pid=2065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:53.346000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 12 18:18:53.351000 audit[2067]: NETFILTER_CFG table=filter:38 
family=2 entries=1 op=nft_register_rule pid=2067 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:18:53.351000 audit[2067]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffcefe29fb0 a2=0 a3=0 items=0 ppid=1900 pid=2067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:53.351000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 12 18:18:53.355000 audit[2069]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2069 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:18:53.355000 audit[2069]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff2a191db0 a2=0 a3=0 items=0 ppid=1900 pid=2069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:53.355000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 12 18:18:53.359000 audit[2071]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2071 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:18:53.359000 audit[2071]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffc16253d20 a2=0 a3=0 items=0 ppid=1900 pid=2071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:53.359000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 18:18:53.363000 audit[2073]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2073 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:18:53.363000 audit[2073]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff868c50a0 a2=0 a3=0 items=0 ppid=1900 pid=2073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:18:53.363000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 12 18:18:53.365379 systemd-networkd[1510]: docker0: Link UP Dec 12 18:18:53.376639 dockerd[1900]: time="2025-12-12T18:18:53.376414883Z" level=info msg="Loading containers: done." Dec 12 18:18:53.948388 systemd-timesyncd[1479]: Contacted time server 5.161.191.31:123 (2.flatcar.pool.ntp.org). Dec 12 18:18:53.948557 systemd-timesyncd[1479]: Initial clock synchronization to Fri 2025-12-12 18:18:53.947975 UTC. Dec 12 18:18:53.949951 systemd-resolved[1288]: Clock change detected. Flushing caches. 
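
The audit triplets above (NETFILTER_CFG / SYSCALL / PROCTITLE) record dockerd (ppid=1900) running /usr/bin/xtables-nft-multi as iptables/ip6tables to create its DOCKER-* chains and rules; arch=c000003e with syscall=46 is sendmsg on x86_64, i.e. the netlink message that installs each nft object, and the PROCTITLE field is the hex-encoded, NUL-separated argv of the invocation. A minimal decoding sketch in plain Python (the sample value is copied from the DOCKER-USER record above):

# decode_proctitle.py -- turn an audit PROCTITLE hex string back into argv.
# The audit subsystem hex-encodes proctitle because the kernel stores argv
# as a single NUL-separated byte string.

def decode_proctitle(hex_value: str) -> list[str]:
    raw = bytes.fromhex(hex_value)
    # argv entries are separated by NUL bytes; drop empty trailing pieces
    return [part.decode("utf-8", "replace") for part in raw.split(b"\x00") if part]

if __name__ == "__main__":
    # Sample taken from the DOCKER-USER record above.
    sample = ("2F7573722F62696E2F69707461626C6573002D2D77616974"
              "002D740066696C746572002D4E00444F434B45522D55534552")
    print(decode_proctitle(sample))
    # -> ['/usr/bin/iptables', '--wait', '-t', 'filter', '-N', 'DOCKER-USER']
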
Dec 12 18:18:53.952513 dockerd[1900]: time="2025-12-12T18:18:53.952195119Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 12 18:18:53.952513 dockerd[1900]: time="2025-12-12T18:18:53.952313695Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 12 18:18:53.952513 dockerd[1900]: time="2025-12-12T18:18:53.952424753Z" level=info msg="Initializing buildkit" Dec 12 18:18:53.988923 dockerd[1900]: time="2025-12-12T18:18:53.988818602Z" level=info msg="Completed buildkit initialization" Dec 12 18:18:54.001551 dockerd[1900]: time="2025-12-12T18:18:54.001419240Z" level=info msg="Daemon has completed initialization" Dec 12 18:18:54.002526 dockerd[1900]: time="2025-12-12T18:18:54.001827973Z" level=info msg="API listen on /run/docker.sock" Dec 12 18:18:54.002869 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 12 18:18:54.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:18:54.988145 containerd[1616]: time="2025-12-12T18:18:54.988029767Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Dec 12 18:18:55.614234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2866054176.mount: Deactivated successfully. Dec 12 18:18:57.070597 containerd[1616]: time="2025-12-12T18:18:57.070535289Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:18:57.073267 containerd[1616]: time="2025-12-12T18:18:57.073177236Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=27403437" Dec 12 18:18:57.074502 containerd[1616]: time="2025-12-12T18:18:57.074389399Z" level=info msg="ImageCreate event name:\"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:18:57.081514 containerd[1616]: time="2025-12-12T18:18:57.081056251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:18:57.084112 containerd[1616]: time="2025-12-12T18:18:57.084055536Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"29068782\" in 2.09529328s" Dec 12 18:18:57.084330 containerd[1616]: time="2025-12-12T18:18:57.084308997Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\"" Dec 12 18:18:57.085084 containerd[1616]: time="2025-12-12T18:18:57.085051417Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Dec 12 18:18:58.726551 containerd[1616]: time="2025-12-12T18:18:58.725704628Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:18:58.727586 containerd[1616]: time="2025-12-12T18:18:58.727534530Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=0" Dec 12 18:18:58.729232 containerd[1616]: time="2025-12-12T18:18:58.729121714Z" level=info msg="ImageCreate event name:\"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:18:58.734060 containerd[1616]: time="2025-12-12T18:18:58.733982007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:18:58.736532 containerd[1616]: time="2025-12-12T18:18:58.736106615Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"26649046\" in 1.650756815s" Dec 12 18:18:58.736532 containerd[1616]: time="2025-12-12T18:18:58.736159548Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\"" Dec 12 18:18:58.737103 containerd[1616]: time="2025-12-12T18:18:58.737075099Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Dec 12 18:19:00.362988 containerd[1616]: time="2025-12-12T18:19:00.362883450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:19:00.366546 containerd[1616]: time="2025-12-12T18:19:00.366185882Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=19396111" Dec 12 18:19:00.368051 containerd[1616]: time="2025-12-12T18:19:00.367955033Z" level=info msg="ImageCreate event name:\"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:19:00.374108 containerd[1616]: time="2025-12-12T18:19:00.374006971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:19:00.377080 containerd[1616]: time="2025-12-12T18:19:00.375924205Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"21061302\" in 1.63868621s" Dec 12 18:19:00.377080 containerd[1616]: time="2025-12-12T18:19:00.375984752Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\"" Dec 12 18:19:00.377736 containerd[1616]: time="2025-12-12T18:19:00.377654412Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" 
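
Each "Pulled image ... in Ns" line above pairs containerd's reported image size with the wall-clock pull time, so a rough effective pull rate can be read straight off the log. A small sketch using the three figures logged above; the reported size is the image size, not necessarily the number of bytes actually transferred (layers are compressed and may already be cached), so treat the result as an order-of-magnitude estimate:

# pull_throughput.py -- back-of-the-envelope pull rates from the containerd lines above.

pulls = {
    # image: (reported size in bytes, logged pull duration in seconds)
    "kube-apiserver:v1.32.10":          (29_068_782, 2.09529328),
    "kube-controller-manager:v1.32.10": (26_649_046, 1.650756815),
    "kube-scheduler:v1.32.10":          (21_061_302, 1.63868621),
}

for image, (size, seconds) in pulls.items():
    mb_per_s = size / seconds / 1e6
    print(f"{image:35s} {size/1e6:6.1f} MB in {seconds:5.2f} s  ~ {mb_per_s:5.1f} MB/s")
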
Dec 12 18:19:00.381728 systemd-resolved[1288]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Dec 12 18:19:01.651689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1011947695.mount: Deactivated successfully. Dec 12 18:19:02.599375 containerd[1616]: time="2025-12-12T18:19:02.599295520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:19:02.600947 containerd[1616]: time="2025-12-12T18:19:02.600893245Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=19571995" Dec 12 18:19:02.604064 containerd[1616]: time="2025-12-12T18:19:02.603960627Z" level=info msg="ImageCreate event name:\"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:19:02.610509 containerd[1616]: time="2025-12-12T18:19:02.609690490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:19:02.611034 containerd[1616]: time="2025-12-12T18:19:02.610974894Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"31160442\" in 2.233001036s" Dec 12 18:19:02.611307 containerd[1616]: time="2025-12-12T18:19:02.611279062Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\"" Dec 12 18:19:02.611985 containerd[1616]: time="2025-12-12T18:19:02.611936698Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Dec 12 18:19:02.714903 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 12 18:19:02.718199 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:19:02.959206 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:19:02.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:02.961128 kernel: kauditd_printk_skb: 134 callbacks suppressed Dec 12 18:19:02.961242 kernel: audit: type=1130 audit(1765563542.958:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:19:02.974358 (kubelet)[2203]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:19:03.066760 kubelet[2203]: E1212 18:19:03.066406 2203 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:19:03.071359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:19:03.071667 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:19:03.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 18:19:03.073177 systemd[1]: kubelet.service: Consumed 277ms CPU time, 108.5M memory peak. Dec 12 18:19:03.079880 kernel: audit: type=1131 audit(1765563543.071:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 18:19:03.377150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1828275420.mount: Deactivated successfully. Dec 12 18:19:03.478758 systemd-resolved[1288]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Dec 12 18:19:04.569510 containerd[1616]: time="2025-12-12T18:19:04.569358081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:19:04.573270 containerd[1616]: time="2025-12-12T18:19:04.573207108Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=17569900" Dec 12 18:19:04.575433 containerd[1616]: time="2025-12-12T18:19:04.575309969Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:19:04.582814 containerd[1616]: time="2025-12-12T18:19:04.582711786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:19:04.587675 containerd[1616]: time="2025-12-12T18:19:04.587099734Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.974933836s" Dec 12 18:19:04.587675 containerd[1616]: time="2025-12-12T18:19:04.587156602Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Dec 12 18:19:04.587908 containerd[1616]: time="2025-12-12T18:19:04.587789528Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 12 18:19:05.125964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2829752077.mount: Deactivated successfully. 
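
Most of the remaining lines come from the kubelet and carry klog-style headers, e.g. E1212 18:19:03.066406 2203 run.go:72] ...: a severity letter (I/W/E/F), month and day, time with microseconds, the emitting PID, the source file and line, then the message. A small parsing sketch written against the lines shown here rather than any formal klog grammar:

# parse_klog.py -- extract fields from klog-style headers such as
#   E1212 18:19:03.066406    2203 run.go:72] "command failed" err="..."
import re

KLOG = re.compile(
    r'(?P<sev>[IWEF])(?P<month>\d{2})(?P<day>\d{2})\s+'
    r'(?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+'
    r'(?P<pid>\d+)\s+'
    r'(?P<file>[\w./-]+):(?P<line>\d+)\]\s*(?P<msg>.*)'
)

sample = ('E1212 18:19:03.066406    2203 run.go:72] "command failed" '
          'err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, ..."')

m = KLOG.search(sample)          # search() also works on full journal lines with a prefix
if m:
    print(m.group("sev"), m.group("file"), m.group("line"))
    print(m.group("msg"))
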
Dec 12 18:19:05.143525 containerd[1616]: time="2025-12-12T18:19:05.142646968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:19:05.145929 containerd[1616]: time="2025-12-12T18:19:05.145849788Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 12 18:19:05.146440 containerd[1616]: time="2025-12-12T18:19:05.146403694Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:19:05.150130 containerd[1616]: time="2025-12-12T18:19:05.150072334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:19:05.151889 containerd[1616]: time="2025-12-12T18:19:05.151840796Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 564.019444ms" Dec 12 18:19:05.152320 containerd[1616]: time="2025-12-12T18:19:05.152058561Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 12 18:19:05.153078 containerd[1616]: time="2025-12-12T18:19:05.153038946Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Dec 12 18:19:06.033415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2585609755.mount: Deactivated successfully. 
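
containerd reports pull times as Go duration strings, switching units below one second ("564.019444ms" for the pause image above versus "1.974933836s" for coredns), so stripping a fixed suffix and calling float() would mix units. A minimal converter covering only the suffixes that plausibly appear here; compound values such as "1m30s" are assumed not to occur in this log:

# go_duration.py -- convert simple Go duration strings to seconds.

_UNITS = {"ns": 1e-9, "us": 1e-6, "ms": 1e-3, "s": 1.0, "m": 60.0, "h": 3600.0}

def to_seconds(dur: str) -> float:
    # try longest suffixes first so "ms" is matched before "s"
    for suffix in sorted(_UNITS, key=len, reverse=True):
        if dur.endswith(suffix):
            return float(dur[:-len(suffix)]) * _UNITS[suffix]
    raise ValueError(f"unrecognized duration: {dur!r}")

print(to_seconds("564.019444ms"))   # 0.564019444
print(to_seconds("1.974933836s"))   # 1.974933836
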
Dec 12 18:19:09.748527 containerd[1616]: time="2025-12-12T18:19:09.747992811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:19:09.751078 containerd[1616]: time="2025-12-12T18:19:09.751022402Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=46789595" Dec 12 18:19:09.752310 containerd[1616]: time="2025-12-12T18:19:09.752234255Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:19:09.759683 containerd[1616]: time="2025-12-12T18:19:09.759560974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:19:09.760470 containerd[1616]: time="2025-12-12T18:19:09.760243734Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.607167122s" Dec 12 18:19:09.760470 containerd[1616]: time="2025-12-12T18:19:09.760286815Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Dec 12 18:19:12.217792 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:19:12.218121 systemd[1]: kubelet.service: Consumed 277ms CPU time, 108.5M memory peak. Dec 12 18:19:12.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:12.227641 kernel: audit: type=1130 audit(1765563552.216:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:12.227170 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:19:12.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:12.237518 kernel: audit: type=1131 audit(1765563552.216:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:12.277434 systemd[1]: Reload requested from client PID 2347 ('systemctl') (unit session-9.scope)... Dec 12 18:19:12.277462 systemd[1]: Reloading... Dec 12 18:19:12.473535 zram_generator::config[2393]: No configuration found. Dec 12 18:19:12.958035 systemd[1]: Reloading finished in 679 ms. 
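
At this point the log shows seven images pulled for the control plane: kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause and etcd. Summing the sizes quoted in the "Pulled image" messages gives a rough footprint for the image data fetched before the kubelet is even configured (again, these are containerd's reported image sizes, not exact network bytes):

# image_footprint.py -- total of the image sizes logged by containerd above.

sizes = {
    "kube-apiserver:v1.32.10":          29_068_782,
    "kube-controller-manager:v1.32.10": 26_649_046,
    "kube-scheduler:v1.32.10":          21_061_302,
    "kube-proxy:v1.32.10":              31_160_442,
    "coredns:v1.11.3":                  18_562_039,
    "pause:3.10":                          320_368,
    "etcd:3.5.16-0":                    57_680_541,
}

total = sum(sizes.values())
print(f"{total} bytes ~ {total/1e6:.1f} MB ~ {total/2**20:.0f} MiB")
# -> 184502520 bytes ~ 184.5 MB ~ 176 MiB
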
Dec 12 18:19:12.984000 audit: BPF prog-id=61 op=LOAD Dec 12 18:19:12.989528 kernel: audit: type=1334 audit(1765563552.984:291): prog-id=61 op=LOAD Dec 12 18:19:12.988000 audit: BPF prog-id=53 op=UNLOAD Dec 12 18:19:12.994511 kernel: audit: type=1334 audit(1765563552.988:292): prog-id=53 op=UNLOAD Dec 12 18:19:12.989000 audit: BPF prog-id=62 op=LOAD Dec 12 18:19:12.999535 kernel: audit: type=1334 audit(1765563552.989:293): prog-id=62 op=LOAD Dec 12 18:19:12.989000 audit: BPF prog-id=56 op=UNLOAD Dec 12 18:19:13.003535 kernel: audit: type=1334 audit(1765563552.989:294): prog-id=56 op=UNLOAD Dec 12 18:19:12.993000 audit: BPF prog-id=63 op=LOAD Dec 12 18:19:12.993000 audit: BPF prog-id=50 op=UNLOAD Dec 12 18:19:13.007810 kernel: audit: type=1334 audit(1765563552.993:295): prog-id=63 op=LOAD Dec 12 18:19:13.007911 kernel: audit: type=1334 audit(1765563552.993:296): prog-id=50 op=UNLOAD Dec 12 18:19:12.993000 audit: BPF prog-id=64 op=LOAD Dec 12 18:19:13.015518 kernel: audit: type=1334 audit(1765563552.993:297): prog-id=64 op=LOAD Dec 12 18:19:13.015651 kernel: audit: type=1334 audit(1765563552.993:298): prog-id=65 op=LOAD Dec 12 18:19:12.993000 audit: BPF prog-id=65 op=LOAD Dec 12 18:19:12.993000 audit: BPF prog-id=51 op=UNLOAD Dec 12 18:19:12.993000 audit: BPF prog-id=52 op=UNLOAD Dec 12 18:19:12.998000 audit: BPF prog-id=66 op=LOAD Dec 12 18:19:12.998000 audit: BPF prog-id=44 op=UNLOAD Dec 12 18:19:12.998000 audit: BPF prog-id=67 op=LOAD Dec 12 18:19:12.998000 audit: BPF prog-id=68 op=LOAD Dec 12 18:19:12.998000 audit: BPF prog-id=45 op=UNLOAD Dec 12 18:19:12.998000 audit: BPF prog-id=46 op=UNLOAD Dec 12 18:19:13.009000 audit: BPF prog-id=69 op=LOAD Dec 12 18:19:13.009000 audit: BPF prog-id=41 op=UNLOAD Dec 12 18:19:13.009000 audit: BPF prog-id=70 op=LOAD Dec 12 18:19:13.009000 audit: BPF prog-id=71 op=LOAD Dec 12 18:19:13.009000 audit: BPF prog-id=42 op=UNLOAD Dec 12 18:19:13.009000 audit: BPF prog-id=43 op=UNLOAD Dec 12 18:19:13.009000 audit: BPF prog-id=72 op=LOAD Dec 12 18:19:13.009000 audit: BPF prog-id=73 op=LOAD Dec 12 18:19:13.009000 audit: BPF prog-id=54 op=UNLOAD Dec 12 18:19:13.009000 audit: BPF prog-id=55 op=UNLOAD Dec 12 18:19:13.012000 audit: BPF prog-id=74 op=LOAD Dec 12 18:19:13.012000 audit: BPF prog-id=58 op=UNLOAD Dec 12 18:19:13.013000 audit: BPF prog-id=75 op=LOAD Dec 12 18:19:13.013000 audit: BPF prog-id=76 op=LOAD Dec 12 18:19:13.013000 audit: BPF prog-id=59 op=UNLOAD Dec 12 18:19:13.013000 audit: BPF prog-id=60 op=UNLOAD Dec 12 18:19:13.013000 audit: BPF prog-id=77 op=LOAD Dec 12 18:19:13.013000 audit: BPF prog-id=57 op=UNLOAD Dec 12 18:19:13.017000 audit: BPF prog-id=78 op=LOAD Dec 12 18:19:13.017000 audit: BPF prog-id=47 op=UNLOAD Dec 12 18:19:13.018000 audit: BPF prog-id=79 op=LOAD Dec 12 18:19:13.018000 audit: BPF prog-id=80 op=LOAD Dec 12 18:19:13.018000 audit: BPF prog-id=48 op=UNLOAD Dec 12 18:19:13.018000 audit: BPF prog-id=49 op=UNLOAD Dec 12 18:19:13.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:13.049172 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:19:13.057890 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 18:19:13.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:19:13.058514 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:19:13.058602 systemd[1]: kubelet.service: Consumed 162ms CPU time, 98.2M memory peak. Dec 12 18:19:13.061949 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:19:13.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:13.269413 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:19:13.282230 (kubelet)[2449]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 18:19:13.361377 kubelet[2449]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:19:13.361377 kubelet[2449]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 18:19:13.361377 kubelet[2449]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:19:13.361377 kubelet[2449]: I1212 18:19:13.359458 2449 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 18:19:14.050555 kubelet[2449]: I1212 18:19:14.050459 2449 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 12 18:19:14.050555 kubelet[2449]: I1212 18:19:14.050554 2449 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 18:19:14.051461 kubelet[2449]: I1212 18:19:14.051432 2449 server.go:954] "Client rotation is on, will bootstrap in background" Dec 12 18:19:14.151407 kubelet[2449]: I1212 18:19:14.151278 2449 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 18:19:14.177655 kubelet[2449]: E1212 18:19:14.177544 2449 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://64.23.253.31:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 64.23.253.31:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:19:14.210668 kubelet[2449]: I1212 18:19:14.210633 2449 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 18:19:14.219474 kubelet[2449]: I1212 18:19:14.219409 2449 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 12 18:19:14.227319 kubelet[2449]: I1212 18:19:14.227194 2449 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 18:19:14.227592 kubelet[2449]: I1212 18:19:14.227306 2449 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515.1.0-f-8be9c60ab1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 18:19:14.229671 kubelet[2449]: I1212 18:19:14.229588 2449 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 18:19:14.229671 kubelet[2449]: I1212 18:19:14.229653 2449 container_manager_linux.go:304] "Creating device plugin manager" Dec 12 18:19:14.231334 kubelet[2449]: I1212 18:19:14.231270 2449 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:19:14.238745 kubelet[2449]: I1212 18:19:14.238670 2449 kubelet.go:446] "Attempting to sync node with API server" Dec 12 18:19:14.239010 kubelet[2449]: I1212 18:19:14.238757 2449 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 18:19:14.243586 kubelet[2449]: I1212 18:19:14.243515 2449 kubelet.go:352] "Adding apiserver pod source" Dec 12 18:19:14.243586 kubelet[2449]: I1212 18:19:14.243576 2449 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 18:19:14.290727 kubelet[2449]: I1212 18:19:14.290666 2449 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 12 18:19:14.318763 kubelet[2449]: W1212 18:19:14.318341 2449 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.23.253.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.23.253.31:6443: connect: connection refused Dec 12 18:19:14.318763 kubelet[2449]: E1212 18:19:14.318462 2449 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://64.23.253.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.253.31:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:19:14.320018 kubelet[2449]: I1212 18:19:14.319408 2449 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 12 18:19:14.322541 kubelet[2449]: W1212 18:19:14.321379 2449 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 12 18:19:14.322541 kubelet[2449]: W1212 18:19:14.321983 2449 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.23.253.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515.1.0-f-8be9c60ab1&limit=500&resourceVersion=0": dial tcp 64.23.253.31:6443: connect: connection refused Dec 12 18:19:14.322541 kubelet[2449]: E1212 18:19:14.322074 2449 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://64.23.253.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515.1.0-f-8be9c60ab1&limit=500&resourceVersion=0\": dial tcp 64.23.253.31:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:19:14.322541 kubelet[2449]: I1212 18:19:14.322381 2449 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 18:19:14.322541 kubelet[2449]: I1212 18:19:14.322418 2449 server.go:1287] "Started kubelet" Dec 12 18:19:14.327526 kubelet[2449]: I1212 18:19:14.327373 2449 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 18:19:14.329884 kubelet[2449]: I1212 18:19:14.329849 2449 server.go:479] "Adding debug handlers to kubelet server" Dec 12 18:19:14.361255 kubelet[2449]: I1212 18:19:14.360988 2449 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 18:19:14.364706 kubelet[2449]: E1212 18:19:14.359171 2449 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.23.253.31:6443/api/v1/namespaces/default/events\": dial tcp 64.23.253.31:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4515.1.0-f-8be9c60ab1.18808ab2c3c33078 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4515.1.0-f-8be9c60ab1,UID:ci-4515.1.0-f-8be9c60ab1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4515.1.0-f-8be9c60ab1,},FirstTimestamp:2025-12-12 18:19:14.322395256 +0000 UTC m=+1.032889766,LastTimestamp:2025-12-12 18:19:14.322395256 +0000 UTC m=+1.032889766,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515.1.0-f-8be9c60ab1,}" Dec 12 18:19:14.365922 kubelet[2449]: I1212 18:19:14.365217 2449 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 18:19:14.365922 kubelet[2449]: I1212 18:19:14.365721 2449 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 18:19:14.369193 kubelet[2449]: I1212 18:19:14.367954 2449 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 18:19:14.371584 kubelet[2449]: I1212 18:19:14.371548 2449 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 
18:19:14.372188 kubelet[2449]: E1212 18:19:14.372148 2449 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-f-8be9c60ab1\" not found" Dec 12 18:19:14.374794 kubelet[2449]: I1212 18:19:14.372803 2449 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 18:19:14.374794 kubelet[2449]: I1212 18:19:14.372873 2449 reconciler.go:26] "Reconciler: start to sync state" Dec 12 18:19:14.374794 kubelet[2449]: E1212 18:19:14.373565 2449 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.253.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-f-8be9c60ab1?timeout=10s\": dial tcp 64.23.253.31:6443: connect: connection refused" interval="200ms" Dec 12 18:19:14.374794 kubelet[2449]: W1212 18:19:14.374632 2449 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.253.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.253.31:6443: connect: connection refused Dec 12 18:19:14.374794 kubelet[2449]: E1212 18:19:14.374752 2449 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.23.253.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.253.31:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:19:14.377447 kubelet[2449]: I1212 18:19:14.377419 2449 factory.go:221] Registration of the systemd container factory successfully Dec 12 18:19:14.377719 kubelet[2449]: I1212 18:19:14.377699 2449 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 18:19:14.376000 audit[2461]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2461 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:19:14.376000 audit[2461]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff48435bd0 a2=0 a3=0 items=0 ppid=2449 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:14.376000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 12 18:19:14.380549 kubelet[2449]: E1212 18:19:14.380516 2449 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 18:19:14.380915 kubelet[2449]: I1212 18:19:14.380741 2449 factory.go:221] Registration of the containerd container factory successfully Dec 12 18:19:14.382000 audit[2462]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2462 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:19:14.382000 audit[2462]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcde770910 a2=0 a3=0 items=0 ppid=2449 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:14.382000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 12 18:19:14.392000 audit[2467]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2467 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:19:14.392000 audit[2467]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe20d8a220 a2=0 a3=0 items=0 ppid=2449 pid=2467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:14.392000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 18:19:14.400315 kubelet[2449]: I1212 18:19:14.400235 2449 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 18:19:14.400315 kubelet[2449]: I1212 18:19:14.400262 2449 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 18:19:14.400735 kubelet[2449]: I1212 18:19:14.400292 2449 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:19:14.402000 audit[2469]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2469 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:19:14.402000 audit[2469]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffea74d97f0 a2=0 a3=0 items=0 ppid=2449 pid=2469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:14.402000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 18:19:14.404971 kubelet[2449]: I1212 18:19:14.404539 2449 policy_none.go:49] "None policy: Start" Dec 12 18:19:14.404971 kubelet[2449]: I1212 18:19:14.404571 2449 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 18:19:14.404971 kubelet[2449]: I1212 18:19:14.404680 2449 state_mem.go:35] "Initializing new in-memory state store" Dec 12 18:19:14.413000 audit[2472]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2472 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:19:14.413000 audit[2472]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffdbe931540 a2=0 a3=0 items=0 ppid=2449 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:14.413000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 12 18:19:14.415000 audit[2473]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2473 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:19:14.415000 audit[2473]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe44961d30 a2=0 a3=0 items=0 ppid=2449 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:14.415000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 12 18:19:14.417494 kubelet[2449]: I1212 18:19:14.415271 2449 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 12 18:19:14.417494 kubelet[2449]: I1212 18:19:14.417267 2449 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 12 18:19:14.417494 kubelet[2449]: I1212 18:19:14.417305 2449 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 12 18:19:14.417494 kubelet[2449]: I1212 18:19:14.417336 2449 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 18:19:14.417494 kubelet[2449]: I1212 18:19:14.417346 2449 kubelet.go:2382] "Starting kubelet main sync loop" Dec 12 18:19:14.417653 kubelet[2449]: E1212 18:19:14.417461 2449 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 18:19:14.418000 audit[2474]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2474 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:19:14.418000 audit[2474]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc9cbf4ed0 a2=0 a3=0 items=0 ppid=2449 pid=2474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:14.418000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 12 18:19:14.419000 audit[2475]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2475 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:19:14.419000 audit[2475]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff2c658590 a2=0 a3=0 items=0 ppid=2449 pid=2475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:14.419000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 12 18:19:14.421000 audit[2476]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2476 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:19:14.421000 audit[2477]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2477 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Dec 12 18:19:14.421000 audit[2477]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd1e81e40 a2=0 a3=0 items=0 ppid=2449 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:14.421000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 12 18:19:14.421000 audit[2476]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff2e5548f0 a2=0 a3=0 items=0 ppid=2449 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:14.421000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 12 18:19:14.422000 audit[2478]: NETFILTER_CFG table=filter:52 family=10 entries=1 op=nft_register_chain pid=2478 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:19:14.422000 audit[2478]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffc125e9b0 a2=0 a3=0 items=0 ppid=2449 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:14.422000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 12 18:19:14.426000 audit[2480]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_chain pid=2480 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:19:14.426000 audit[2480]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff6d0c5760 a2=0 a3=0 items=0 ppid=2449 pid=2480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:14.426000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 12 18:19:14.432352 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 12 18:19:14.435926 kubelet[2449]: W1212 18:19:14.435842 2449 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.23.253.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.253.31:6443: connect: connection refused Dec 12 18:19:14.435926 kubelet[2449]: E1212 18:19:14.435918 2449 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.23.253.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.253.31:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:19:14.456875 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 12 18:19:14.462176 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
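
The NodeConfig dumped earlier by container_manager_linux.go mixes an absolute hard-eviction threshold (memory.available < 100Mi) with percentage thresholds for the filesystem signals (nodefs.available 10%, nodefs.inodesFree 5%, imagefs.available 15%, imagefs.inodesFree 5%). The percentages are resolved against the node's actual capacities at runtime; the sketch below replays that arithmetic for illustrative capacities (the memory, disk and inode figures are placeholders, not values taken from this log):

# eviction_thresholds.py -- resolve the hard eviction thresholds from the kubelet's
# logged NodeConfig against hypothetical node capacities (capacities are placeholders).

MI = 1024 * 1024

thresholds = {                       # from the logged HardEvictionThresholds
    "memory.available":   ("quantity", 100 * MI),
    "nodefs.available":   ("percent", 0.10),
    "nodefs.inodesFree":  ("percent", 0.05),
    "imagefs.available":  ("percent", 0.15),
    "imagefs.inodesFree": ("percent", 0.05),
}

capacities = {                       # hypothetical capacities, NOT taken from the log
    "memory.available":   2 * 1024 * MI,   # 2 GiB of RAM
    "nodefs.available":   25 * 1024 * MI,  # 25 GiB root filesystem
    "nodefs.inodesFree":  1_600_000,       # inodes on nodefs
    "imagefs.available":  25 * 1024 * MI,  # imagefs shares the root filesystem here
    "imagefs.inodesFree": 1_600_000,
}

for signal, (kind, value) in thresholds.items():
    limit = value if kind == "quantity" else value * capacities[signal]
    print(f"evict when {signal} < {limit:,.0f}")
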
Dec 12 18:19:14.475406 kubelet[2449]: E1212 18:19:14.475348 2449 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-f-8be9c60ab1\" not found" Dec 12 18:19:14.479067 kubelet[2449]: I1212 18:19:14.478986 2449 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 12 18:19:14.479324 kubelet[2449]: I1212 18:19:14.479275 2449 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 18:19:14.479514 kubelet[2449]: I1212 18:19:14.479299 2449 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 18:19:14.485455 kubelet[2449]: E1212 18:19:14.485413 2449 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 12 18:19:14.485632 kubelet[2449]: E1212 18:19:14.485473 2449 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4515.1.0-f-8be9c60ab1\" not found" Dec 12 18:19:14.488232 kubelet[2449]: I1212 18:19:14.488179 2449 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 18:19:14.531139 systemd[1]: Created slice kubepods-burstable-pod9b73bd2f6f26f0527c7e5469e2fb4776.slice - libcontainer container kubepods-burstable-pod9b73bd2f6f26f0527c7e5469e2fb4776.slice. Dec 12 18:19:14.555570 kubelet[2449]: E1212 18:19:14.555173 2449 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-f-8be9c60ab1\" not found" node="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:14.561410 systemd[1]: Created slice kubepods-burstable-pod9797f0dda91eed814a7e9efb94450320.slice - libcontainer container kubepods-burstable-pod9797f0dda91eed814a7e9efb94450320.slice. Dec 12 18:19:14.565237 kubelet[2449]: E1212 18:19:14.565201 2449 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-f-8be9c60ab1\" not found" node="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:14.571092 systemd[1]: Created slice kubepods-burstable-pod63cd94081228352e06afffa711d03c88.slice - libcontainer container kubepods-burstable-pod63cd94081228352e06afffa711d03c88.slice. 
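
The kubepods-burstable-pod<uid>.slice units created above follow the kubelet's systemd cgroup-driver naming: a QoS slice (kubepods-burstable.slice or kubepods-besteffort.slice) nested under kubepods.slice, with the pod UID appended. The static pods here have dash-free hash UIDs; the dash-to-underscore escaping for ordinary dashed pod UUIDs, and the placement of guaranteed pods directly under kubepods.slice, are assumptions in the sketch below rather than something visible in this log:

# pod_slice_name.py -- derive the systemd slice name the kubelet uses for a pod cgroup,
# following the pattern visible in the log (kubepods-burstable-pod<uid>.slice).

def pod_slice(uid: str, qos: str = "burstable") -> str:
    # Dashes in a pod UID would collide with systemd's slice hierarchy separator,
    # so they are assumed to be escaped to underscores (not exercised by this log).
    uid = uid.replace("-", "_")
    if qos == "guaranteed":
        # assumption: guaranteed pods are parented directly to kubepods.slice
        return f"kubepods-pod{uid}.slice"
    return f"kubepods-{qos}-pod{uid}.slice"

# One of the static-pod UIDs from the slices created above:
print(pod_slice("9b73bd2f6f26f0527c7e5469e2fb4776"))
# -> kubepods-burstable-pod9b73bd2f6f26f0527c7e5469e2fb4776.slice
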
Dec 12 18:19:14.576375 kubelet[2449]: I1212 18:19:14.575537 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/63cd94081228352e06afffa711d03c88-ca-certs\") pod \"kube-controller-manager-ci-4515.1.0-f-8be9c60ab1\" (UID: \"63cd94081228352e06afffa711d03c88\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:14.576375 kubelet[2449]: I1212 18:19:14.575580 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/63cd94081228352e06afffa711d03c88-flexvolume-dir\") pod \"kube-controller-manager-ci-4515.1.0-f-8be9c60ab1\" (UID: \"63cd94081228352e06afffa711d03c88\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:14.576375 kubelet[2449]: I1212 18:19:14.575606 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/63cd94081228352e06afffa711d03c88-kubeconfig\") pod \"kube-controller-manager-ci-4515.1.0-f-8be9c60ab1\" (UID: \"63cd94081228352e06afffa711d03c88\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:14.576375 kubelet[2449]: I1212 18:19:14.575630 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/63cd94081228352e06afffa711d03c88-k8s-certs\") pod \"kube-controller-manager-ci-4515.1.0-f-8be9c60ab1\" (UID: \"63cd94081228352e06afffa711d03c88\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:14.576375 kubelet[2449]: I1212 18:19:14.575654 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/63cd94081228352e06afffa711d03c88-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515.1.0-f-8be9c60ab1\" (UID: \"63cd94081228352e06afffa711d03c88\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:14.576832 kubelet[2449]: I1212 18:19:14.575678 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9797f0dda91eed814a7e9efb94450320-kubeconfig\") pod \"kube-scheduler-ci-4515.1.0-f-8be9c60ab1\" (UID: \"9797f0dda91eed814a7e9efb94450320\") " pod="kube-system/kube-scheduler-ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:14.576832 kubelet[2449]: I1212 18:19:14.575702 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b73bd2f6f26f0527c7e5469e2fb4776-ca-certs\") pod \"kube-apiserver-ci-4515.1.0-f-8be9c60ab1\" (UID: \"9b73bd2f6f26f0527c7e5469e2fb4776\") " pod="kube-system/kube-apiserver-ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:14.576832 kubelet[2449]: I1212 18:19:14.575723 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b73bd2f6f26f0527c7e5469e2fb4776-k8s-certs\") pod \"kube-apiserver-ci-4515.1.0-f-8be9c60ab1\" (UID: \"9b73bd2f6f26f0527c7e5469e2fb4776\") " pod="kube-system/kube-apiserver-ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:14.576832 kubelet[2449]: I1212 18:19:14.575749 2449 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b73bd2f6f26f0527c7e5469e2fb4776-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515.1.0-f-8be9c60ab1\" (UID: \"9b73bd2f6f26f0527c7e5469e2fb4776\") " pod="kube-system/kube-apiserver-ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:14.577508 kubelet[2449]: E1212 18:19:14.577442 2449 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.253.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-f-8be9c60ab1?timeout=10s\": dial tcp 64.23.253.31:6443: connect: connection refused" interval="400ms" Dec 12 18:19:14.578430 kubelet[2449]: E1212 18:19:14.578400 2449 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-f-8be9c60ab1\" not found" node="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:14.582034 kubelet[2449]: I1212 18:19:14.581958 2449 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:14.582448 kubelet[2449]: E1212 18:19:14.582406 2449 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.253.31:6443/api/v1/nodes\": dial tcp 64.23.253.31:6443: connect: connection refused" node="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:14.784243 kubelet[2449]: I1212 18:19:14.784190 2449 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:14.784941 kubelet[2449]: E1212 18:19:14.784888 2449 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.253.31:6443/api/v1/nodes\": dial tcp 64.23.253.31:6443: connect: connection refused" node="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:14.856677 kubelet[2449]: E1212 18:19:14.856272 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:14.857694 containerd[1616]: time="2025-12-12T18:19:14.857445557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4515.1.0-f-8be9c60ab1,Uid:9b73bd2f6f26f0527c7e5469e2fb4776,Namespace:kube-system,Attempt:0,}" Dec 12 18:19:14.866524 kubelet[2449]: E1212 18:19:14.865966 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:14.866986 containerd[1616]: time="2025-12-12T18:19:14.866934286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515.1.0-f-8be9c60ab1,Uid:9797f0dda91eed814a7e9efb94450320,Namespace:kube-system,Attempt:0,}" Dec 12 18:19:14.883424 kubelet[2449]: E1212 18:19:14.882495 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:14.883954 containerd[1616]: time="2025-12-12T18:19:14.883853328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515.1.0-f-8be9c60ab1,Uid:63cd94081228352e06afffa711d03c88,Namespace:kube-system,Attempt:0,}" Dec 12 18:19:14.979467 kubelet[2449]: E1212 18:19:14.978761 2449 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://64.23.253.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-f-8be9c60ab1?timeout=10s\": dial tcp 64.23.253.31:6443: connect: connection refused" interval="800ms" Dec 12 18:19:15.086867 containerd[1616]: time="2025-12-12T18:19:15.086796409Z" level=info msg="connecting to shim 4a148afcf40a202a9f2a287acbcb5049fa31d424fa090e962a85b8ef67239fd8" address="unix:///run/containerd/s/2abd602228e98a7f869965f57d384099ef002011b4bbf8afabd389b740856442" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:19:15.091559 containerd[1616]: time="2025-12-12T18:19:15.090694030Z" level=info msg="connecting to shim 0d14bdac7877e6102441ec4ea7d0dd1bf72e1e5512240324b3e45b142b5d2164" address="unix:///run/containerd/s/c51bc7e89cfaee417f70d5f15daa0ed6f9981005395c4c006e3bde08507bed34" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:19:15.099822 containerd[1616]: time="2025-12-12T18:19:15.099771004Z" level=info msg="connecting to shim 21291356463ed318d794cfebe5a718a2c96d7bf2e67c2ac8c467bf5c7764373b" address="unix:///run/containerd/s/e3aae2e97da2e9ee3e2f1b0752a1185a350165b09bf7caad7a2b1b33c4af83b4" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:19:15.189822 kubelet[2449]: I1212 18:19:15.188655 2449 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:15.189822 kubelet[2449]: E1212 18:19:15.189569 2449 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.253.31:6443/api/v1/nodes\": dial tcp 64.23.253.31:6443: connect: connection refused" node="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:15.223308 systemd[1]: Started cri-containerd-0d14bdac7877e6102441ec4ea7d0dd1bf72e1e5512240324b3e45b142b5d2164.scope - libcontainer container 0d14bdac7877e6102441ec4ea7d0dd1bf72e1e5512240324b3e45b142b5d2164. Dec 12 18:19:15.227781 systemd[1]: Started cri-containerd-4a148afcf40a202a9f2a287acbcb5049fa31d424fa090e962a85b8ef67239fd8.scope - libcontainer container 4a148afcf40a202a9f2a287acbcb5049fa31d424fa090e962a85b8ef67239fd8. Dec 12 18:19:15.242854 systemd[1]: Started cri-containerd-21291356463ed318d794cfebe5a718a2c96d7bf2e67c2ac8c467bf5c7764373b.scope - libcontainer container 21291356463ed318d794cfebe5a718a2c96d7bf2e67c2ac8c467bf5c7764373b. 
Dec 12 18:19:15.272000 audit: BPF prog-id=81 op=LOAD Dec 12 18:19:15.273000 audit: BPF prog-id=82 op=LOAD Dec 12 18:19:15.273000 audit[2543]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=2504 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.273000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461313438616663663430613230326139663261323837616362636235 Dec 12 18:19:15.273000 audit: BPF prog-id=82 op=UNLOAD Dec 12 18:19:15.273000 audit[2543]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2504 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.273000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461313438616663663430613230326139663261323837616362636235 Dec 12 18:19:15.274000 audit: BPF prog-id=83 op=LOAD Dec 12 18:19:15.274000 audit[2543]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=2504 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.274000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461313438616663663430613230326139663261323837616362636235 Dec 12 18:19:15.274000 audit: BPF prog-id=84 op=LOAD Dec 12 18:19:15.274000 audit[2543]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=2504 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.274000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461313438616663663430613230326139663261323837616362636235 Dec 12 18:19:15.274000 audit: BPF prog-id=84 op=UNLOAD Dec 12 18:19:15.274000 audit[2543]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2504 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.274000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461313438616663663430613230326139663261323837616362636235 Dec 12 18:19:15.274000 audit: BPF prog-id=83 op=UNLOAD Dec 12 18:19:15.274000 audit[2543]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2504 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.274000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461313438616663663430613230326139663261323837616362636235 Dec 12 18:19:15.274000 audit: BPF prog-id=85 op=LOAD Dec 12 18:19:15.274000 audit[2543]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=2504 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.274000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461313438616663663430613230326139663261323837616362636235 Dec 12 18:19:15.279000 audit: BPF prog-id=86 op=LOAD Dec 12 18:19:15.280000 audit: BPF prog-id=87 op=LOAD Dec 12 18:19:15.280000 audit[2531]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2506 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.280000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064313462646163373837376536313032343431656334656137643064 Dec 12 18:19:15.280000 audit: BPF prog-id=87 op=UNLOAD Dec 12 18:19:15.280000 audit[2531]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2506 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.280000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064313462646163373837376536313032343431656334656137643064 Dec 12 18:19:15.281000 audit: BPF prog-id=88 op=LOAD Dec 12 18:19:15.281000 audit[2531]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2506 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.281000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064313462646163373837376536313032343431656334656137643064 Dec 12 18:19:15.281000 audit: BPF prog-id=89 op=LOAD Dec 12 18:19:15.281000 audit[2531]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2506 pid=2531 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.281000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064313462646163373837376536313032343431656334656137643064 Dec 12 18:19:15.282000 audit: BPF prog-id=89 op=UNLOAD Dec 12 18:19:15.282000 audit[2531]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2506 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064313462646163373837376536313032343431656334656137643064 Dec 12 18:19:15.282000 audit: BPF prog-id=88 op=UNLOAD Dec 12 18:19:15.282000 audit[2531]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2506 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064313462646163373837376536313032343431656334656137643064 Dec 12 18:19:15.282000 audit: BPF prog-id=90 op=LOAD Dec 12 18:19:15.282000 audit[2531]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2506 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064313462646163373837376536313032343431656334656137643064 Dec 12 18:19:15.294000 audit: BPF prog-id=91 op=LOAD Dec 12 18:19:15.296000 audit: BPF prog-id=92 op=LOAD Dec 12 18:19:15.296000 audit[2538]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2516 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.296000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231323931333536343633656433313864373934636665626535613731 Dec 12 18:19:15.297000 audit: BPF prog-id=92 op=UNLOAD Dec 12 18:19:15.297000 audit[2538]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2516 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.297000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231323931333536343633656433313864373934636665626535613731 Dec 12 18:19:15.297000 audit: BPF prog-id=93 op=LOAD Dec 12 18:19:15.297000 audit[2538]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2516 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.297000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231323931333536343633656433313864373934636665626535613731 Dec 12 18:19:15.297000 audit: BPF prog-id=94 op=LOAD Dec 12 18:19:15.297000 audit[2538]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=2516 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.297000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231323931333536343633656433313864373934636665626535613731 Dec 12 18:19:15.299000 audit: BPF prog-id=94 op=UNLOAD Dec 12 18:19:15.299000 audit[2538]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2516 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.299000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231323931333536343633656433313864373934636665626535613731 Dec 12 18:19:15.299000 audit: BPF prog-id=93 op=UNLOAD Dec 12 18:19:15.299000 audit[2538]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2516 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.299000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231323931333536343633656433313864373934636665626535613731 Dec 12 18:19:15.299000 audit: BPF prog-id=95 op=LOAD Dec 12 18:19:15.299000 audit[2538]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=2516 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.299000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231323931333536343633656433313864373934636665626535613731 Dec 12 18:19:15.347384 kubelet[2449]: W1212 18:19:15.346445 2449 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.23.253.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515.1.0-f-8be9c60ab1&limit=500&resourceVersion=0": dial tcp 64.23.253.31:6443: connect: connection refused Dec 12 18:19:15.347384 kubelet[2449]: E1212 18:19:15.346826 2449 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://64.23.253.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515.1.0-f-8be9c60ab1&limit=500&resourceVersion=0\": dial tcp 64.23.253.31:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:19:15.378918 containerd[1616]: time="2025-12-12T18:19:15.378676618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4515.1.0-f-8be9c60ab1,Uid:9b73bd2f6f26f0527c7e5469e2fb4776,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a148afcf40a202a9f2a287acbcb5049fa31d424fa090e962a85b8ef67239fd8\"" Dec 12 18:19:15.383977 kubelet[2449]: E1212 18:19:15.383900 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:15.388562 containerd[1616]: time="2025-12-12T18:19:15.388414555Z" level=info msg="CreateContainer within sandbox \"4a148afcf40a202a9f2a287acbcb5049fa31d424fa090e962a85b8ef67239fd8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 18:19:15.398146 containerd[1616]: time="2025-12-12T18:19:15.398064524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515.1.0-f-8be9c60ab1,Uid:63cd94081228352e06afffa711d03c88,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d14bdac7877e6102441ec4ea7d0dd1bf72e1e5512240324b3e45b142b5d2164\"" Dec 12 18:19:15.399255 kubelet[2449]: E1212 18:19:15.399019 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:15.404800 containerd[1616]: time="2025-12-12T18:19:15.404710869Z" level=info msg="CreateContainer within sandbox \"0d14bdac7877e6102441ec4ea7d0dd1bf72e1e5512240324b3e45b142b5d2164\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 18:19:15.427975 containerd[1616]: time="2025-12-12T18:19:15.427916793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515.1.0-f-8be9c60ab1,Uid:9797f0dda91eed814a7e9efb94450320,Namespace:kube-system,Attempt:0,} returns sandbox id \"21291356463ed318d794cfebe5a718a2c96d7bf2e67c2ac8c467bf5c7764373b\"" Dec 12 18:19:15.429897 kubelet[2449]: E1212 18:19:15.429626 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:15.432533 containerd[1616]: time="2025-12-12T18:19:15.432449675Z" level=info msg="Container 81dbfb5d750cc0380aa6387e556097e0366a0a189c278f5fb14946a2d6689ee0: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:19:15.435194 
containerd[1616]: time="2025-12-12T18:19:15.434989532Z" level=info msg="CreateContainer within sandbox \"21291356463ed318d794cfebe5a718a2c96d7bf2e67c2ac8c467bf5c7764373b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 18:19:15.436527 containerd[1616]: time="2025-12-12T18:19:15.436277308Z" level=info msg="Container 309817569fa1d77fbdb4f8fb4af3e2673fbd491879fc2457cbd24d49fa313525: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:19:15.458908 containerd[1616]: time="2025-12-12T18:19:15.458296344Z" level=info msg="CreateContainer within sandbox \"0d14bdac7877e6102441ec4ea7d0dd1bf72e1e5512240324b3e45b142b5d2164\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"309817569fa1d77fbdb4f8fb4af3e2673fbd491879fc2457cbd24d49fa313525\"" Dec 12 18:19:15.462499 containerd[1616]: time="2025-12-12T18:19:15.462403921Z" level=info msg="StartContainer for \"309817569fa1d77fbdb4f8fb4af3e2673fbd491879fc2457cbd24d49fa313525\"" Dec 12 18:19:15.465087 containerd[1616]: time="2025-12-12T18:19:15.465015872Z" level=info msg="connecting to shim 309817569fa1d77fbdb4f8fb4af3e2673fbd491879fc2457cbd24d49fa313525" address="unix:///run/containerd/s/c51bc7e89cfaee417f70d5f15daa0ed6f9981005395c4c006e3bde08507bed34" protocol=ttrpc version=3 Dec 12 18:19:15.465656 containerd[1616]: time="2025-12-12T18:19:15.465060985Z" level=info msg="Container 840b8b6ba9bb282046fd7f626e9cd9f88886469f9b0d8f81159a78ba75ad62cb: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:19:15.477304 containerd[1616]: time="2025-12-12T18:19:15.477247875Z" level=info msg="CreateContainer within sandbox \"4a148afcf40a202a9f2a287acbcb5049fa31d424fa090e962a85b8ef67239fd8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"81dbfb5d750cc0380aa6387e556097e0366a0a189c278f5fb14946a2d6689ee0\"" Dec 12 18:19:15.478895 containerd[1616]: time="2025-12-12T18:19:15.478848582Z" level=info msg="StartContainer for \"81dbfb5d750cc0380aa6387e556097e0366a0a189c278f5fb14946a2d6689ee0\"" Dec 12 18:19:15.482336 containerd[1616]: time="2025-12-12T18:19:15.482283929Z" level=info msg="connecting to shim 81dbfb5d750cc0380aa6387e556097e0366a0a189c278f5fb14946a2d6689ee0" address="unix:///run/containerd/s/2abd602228e98a7f869965f57d384099ef002011b4bbf8afabd389b740856442" protocol=ttrpc version=3 Dec 12 18:19:15.485901 containerd[1616]: time="2025-12-12T18:19:15.485766436Z" level=info msg="CreateContainer within sandbox \"21291356463ed318d794cfebe5a718a2c96d7bf2e67c2ac8c467bf5c7764373b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"840b8b6ba9bb282046fd7f626e9cd9f88886469f9b0d8f81159a78ba75ad62cb\"" Dec 12 18:19:15.489082 containerd[1616]: time="2025-12-12T18:19:15.489043626Z" level=info msg="StartContainer for \"840b8b6ba9bb282046fd7f626e9cd9f88886469f9b0d8f81159a78ba75ad62cb\"" Dec 12 18:19:15.491815 containerd[1616]: time="2025-12-12T18:19:15.491714434Z" level=info msg="connecting to shim 840b8b6ba9bb282046fd7f626e9cd9f88886469f9b0d8f81159a78ba75ad62cb" address="unix:///run/containerd/s/e3aae2e97da2e9ee3e2f1b0752a1185a350165b09bf7caad7a2b1b33c4af83b4" protocol=ttrpc version=3 Dec 12 18:19:15.498802 systemd[1]: Started cri-containerd-309817569fa1d77fbdb4f8fb4af3e2673fbd491879fc2457cbd24d49fa313525.scope - libcontainer container 309817569fa1d77fbdb4f8fb4af3e2673fbd491879fc2457cbd24d49fa313525. 
Dec 12 18:19:15.528914 systemd[1]: Started cri-containerd-81dbfb5d750cc0380aa6387e556097e0366a0a189c278f5fb14946a2d6689ee0.scope - libcontainer container 81dbfb5d750cc0380aa6387e556097e0366a0a189c278f5fb14946a2d6689ee0. Dec 12 18:19:15.538893 systemd[1]: Started cri-containerd-840b8b6ba9bb282046fd7f626e9cd9f88886469f9b0d8f81159a78ba75ad62cb.scope - libcontainer container 840b8b6ba9bb282046fd7f626e9cd9f88886469f9b0d8f81159a78ba75ad62cb. Dec 12 18:19:15.543000 audit: BPF prog-id=96 op=LOAD Dec 12 18:19:15.544000 audit: BPF prog-id=97 op=LOAD Dec 12 18:19:15.544000 audit[2617]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=2506 pid=2617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.544000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330393831373536396661316437376662646234663866623461663365 Dec 12 18:19:15.544000 audit: BPF prog-id=97 op=UNLOAD Dec 12 18:19:15.544000 audit[2617]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2506 pid=2617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.544000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330393831373536396661316437376662646234663866623461663365 Dec 12 18:19:15.545000 audit: BPF prog-id=98 op=LOAD Dec 12 18:19:15.545000 audit[2617]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=2506 pid=2617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.545000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330393831373536396661316437376662646234663866623461663365 Dec 12 18:19:15.545000 audit: BPF prog-id=99 op=LOAD Dec 12 18:19:15.545000 audit[2617]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=2506 pid=2617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.545000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330393831373536396661316437376662646234663866623461663365 Dec 12 18:19:15.545000 audit: BPF prog-id=99 op=UNLOAD Dec 12 18:19:15.545000 audit[2617]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2506 pid=2617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.545000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330393831373536396661316437376662646234663866623461663365 Dec 12 18:19:15.545000 audit: BPF prog-id=98 op=UNLOAD Dec 12 18:19:15.545000 audit[2617]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2506 pid=2617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.545000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330393831373536396661316437376662646234663866623461663365 Dec 12 18:19:15.545000 audit: BPF prog-id=100 op=LOAD Dec 12 18:19:15.545000 audit[2617]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=2506 pid=2617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.545000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330393831373536396661316437376662646234663866623461663365 Dec 12 18:19:15.568000 audit: BPF prog-id=101 op=LOAD Dec 12 18:19:15.570000 audit: BPF prog-id=102 op=LOAD Dec 12 18:19:15.570000 audit[2630]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2516 pid=2630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.570000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834306238623662613962623238323034366664376636323665396364 Dec 12 18:19:15.572000 audit: BPF prog-id=102 op=UNLOAD Dec 12 18:19:15.572000 audit[2630]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2516 pid=2630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.572000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834306238623662613962623238323034366664376636323665396364 Dec 12 18:19:15.573000 audit: BPF prog-id=103 op=LOAD Dec 12 18:19:15.573000 audit[2630]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2516 pid=2630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.573000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834306238623662613962623238323034366664376636323665396364 Dec 12 18:19:15.574000 audit: BPF prog-id=104 op=LOAD Dec 12 18:19:15.574000 audit[2630]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=2516 pid=2630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834306238623662613962623238323034366664376636323665396364 Dec 12 18:19:15.575000 audit: BPF prog-id=104 op=UNLOAD Dec 12 18:19:15.575000 audit[2630]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2516 pid=2630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.575000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834306238623662613962623238323034366664376636323665396364 Dec 12 18:19:15.575000 audit: BPF prog-id=103 op=UNLOAD Dec 12 18:19:15.575000 audit[2630]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2516 pid=2630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.575000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834306238623662613962623238323034366664376636323665396364 Dec 12 18:19:15.576000 audit: BPF prog-id=105 op=LOAD Dec 12 18:19:15.576000 audit[2630]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=2516 pid=2630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.576000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834306238623662613962623238323034366664376636323665396364 Dec 12 18:19:15.584000 audit: BPF prog-id=106 op=LOAD Dec 12 18:19:15.585000 audit: BPF prog-id=107 op=LOAD Dec 12 18:19:15.585000 audit[2629]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2504 pid=2629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.585000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831646266623564373530636330333830616136333837653535363039 Dec 12 18:19:15.585000 audit: BPF prog-id=107 op=UNLOAD Dec 12 18:19:15.585000 audit[2629]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2504 pid=2629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831646266623564373530636330333830616136333837653535363039 Dec 12 18:19:15.586000 audit: BPF prog-id=108 op=LOAD Dec 12 18:19:15.586000 audit[2629]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2504 pid=2629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.586000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831646266623564373530636330333830616136333837653535363039 Dec 12 18:19:15.587000 audit: BPF prog-id=109 op=LOAD Dec 12 18:19:15.587000 audit[2629]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2504 pid=2629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.587000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831646266623564373530636330333830616136333837653535363039 Dec 12 18:19:15.587000 audit: BPF prog-id=109 op=UNLOAD Dec 12 18:19:15.587000 audit[2629]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2504 pid=2629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.587000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831646266623564373530636330333830616136333837653535363039 Dec 12 18:19:15.587000 audit: BPF prog-id=108 op=UNLOAD Dec 12 18:19:15.587000 audit[2629]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2504 pid=2629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.587000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831646266623564373530636330333830616136333837653535363039 Dec 12 18:19:15.587000 audit: BPF prog-id=110 op=LOAD Dec 12 18:19:15.587000 audit[2629]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2504 pid=2629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:15.587000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831646266623564373530636330333830616136333837653535363039 Dec 12 18:19:15.662933 containerd[1616]: time="2025-12-12T18:19:15.662732196Z" level=info msg="StartContainer for \"309817569fa1d77fbdb4f8fb4af3e2673fbd491879fc2457cbd24d49fa313525\" returns successfully" Dec 12 18:19:15.706307 containerd[1616]: time="2025-12-12T18:19:15.706174954Z" level=info msg="StartContainer for \"840b8b6ba9bb282046fd7f626e9cd9f88886469f9b0d8f81159a78ba75ad62cb\" returns successfully" Dec 12 18:19:15.721962 containerd[1616]: time="2025-12-12T18:19:15.721499189Z" level=info msg="StartContainer for \"81dbfb5d750cc0380aa6387e556097e0366a0a189c278f5fb14946a2d6689ee0\" returns successfully" Dec 12 18:19:15.779708 kubelet[2449]: E1212 18:19:15.779624 2449 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.253.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-f-8be9c60ab1?timeout=10s\": dial tcp 64.23.253.31:6443: connect: connection refused" interval="1.6s" Dec 12 18:19:15.819304 kubelet[2449]: W1212 18:19:15.819063 2449 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.253.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.253.31:6443: connect: connection refused Dec 12 18:19:15.819670 kubelet[2449]: E1212 18:19:15.819167 2449 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.23.253.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.253.31:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:19:15.839522 kubelet[2449]: W1212 18:19:15.837990 2449 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.23.253.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.23.253.31:6443: connect: connection refused Dec 12 18:19:15.839522 kubelet[2449]: E1212 18:19:15.838108 2449 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://64.23.253.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.253.31:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:19:15.887463 kubelet[2449]: W1212 18:19:15.887028 2449 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://64.23.253.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.253.31:6443: connect: connection refused Dec 12 18:19:15.887463 kubelet[2449]: E1212 18:19:15.887135 2449 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.23.253.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.253.31:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:19:15.996645 kubelet[2449]: I1212 18:19:15.995570 2449 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:15.998881 kubelet[2449]: E1212 18:19:15.998819 2449 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.253.31:6443/api/v1/nodes\": dial tcp 64.23.253.31:6443: connect: connection refused" node="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:16.487818 kubelet[2449]: E1212 18:19:16.486858 2449 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-f-8be9c60ab1\" not found" node="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:16.487818 kubelet[2449]: E1212 18:19:16.487083 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:16.495884 kubelet[2449]: E1212 18:19:16.495728 2449 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-f-8be9c60ab1\" not found" node="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:16.498536 kubelet[2449]: E1212 18:19:16.497608 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:16.499829 kubelet[2449]: E1212 18:19:16.499796 2449 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-f-8be9c60ab1\" not found" node="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:16.500215 kubelet[2449]: E1212 18:19:16.500192 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:17.501582 kubelet[2449]: E1212 18:19:17.501017 2449 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-f-8be9c60ab1\" not found" node="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:17.501582 kubelet[2449]: E1212 18:19:17.501159 2449 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-f-8be9c60ab1\" not found" node="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:17.501582 kubelet[2449]: E1212 18:19:17.501208 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:17.501582 kubelet[2449]: E1212 18:19:17.501276 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:17.502969 kubelet[2449]: E1212 18:19:17.502936 2449 kubelet.go:3190] "No need to create a mirror 
pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-f-8be9c60ab1\" not found" node="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:17.503499 kubelet[2449]: E1212 18:19:17.503444 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:17.601166 kubelet[2449]: I1212 18:19:17.601127 2449 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:18.505387 kubelet[2449]: E1212 18:19:18.505264 2449 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-f-8be9c60ab1\" not found" node="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:18.508146 kubelet[2449]: E1212 18:19:18.505472 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:18.519301 kubelet[2449]: E1212 18:19:18.519077 2449 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-f-8be9c60ab1\" not found" node="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:18.521418 kubelet[2449]: E1212 18:19:18.521346 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:18.576679 kubelet[2449]: E1212 18:19:18.576631 2449 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4515.1.0-f-8be9c60ab1\" not found" node="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:18.666778 kubelet[2449]: I1212 18:19:18.666713 2449 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:18.672978 kubelet[2449]: I1212 18:19:18.672923 2449 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:18.702089 kubelet[2449]: E1212 18:19:18.701819 2449 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4515.1.0-f-8be9c60ab1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:18.702089 kubelet[2449]: I1212 18:19:18.701862 2449 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:18.707726 kubelet[2449]: E1212 18:19:18.707078 2449 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515.1.0-f-8be9c60ab1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:18.707726 kubelet[2449]: I1212 18:19:18.707119 2449 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:18.710353 kubelet[2449]: E1212 18:19:18.710305 2449 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515.1.0-f-8be9c60ab1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:19.286385 kubelet[2449]: I1212 18:19:19.286278 2449 apiserver.go:52] "Watching apiserver" Dec 12 18:19:19.375303 kubelet[2449]: I1212 18:19:19.375228 2449 
desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 18:19:19.508385 kubelet[2449]: I1212 18:19:19.507659 2449 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:19.508385 kubelet[2449]: I1212 18:19:19.507842 2449 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:19.516059 kubelet[2449]: W1212 18:19:19.516016 2449 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 12 18:19:19.516385 kubelet[2449]: E1212 18:19:19.516336 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:19.516615 kubelet[2449]: W1212 18:19:19.516444 2449 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 12 18:19:19.517673 kubelet[2449]: E1212 18:19:19.517641 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:20.510838 kubelet[2449]: E1212 18:19:20.510745 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:20.511761 kubelet[2449]: E1212 18:19:20.511546 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:20.794066 systemd[1]: Reload requested from client PID 2722 ('systemctl') (unit session-9.scope)... Dec 12 18:19:20.794093 systemd[1]: Reloading... Dec 12 18:19:20.956535 zram_generator::config[2768]: No configuration found. Dec 12 18:19:21.476562 systemd[1]: Reloading finished in 681 ms. Dec 12 18:19:21.529544 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:19:21.550244 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 18:19:21.550769 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:19:21.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:21.552366 kernel: kauditd_printk_skb: 203 callbacks suppressed Dec 12 18:19:21.552620 kernel: audit: type=1131 audit(1765563561.549:394): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:21.557397 systemd[1]: kubelet.service: Consumed 1.405s CPU time, 128.7M memory peak. Dec 12 18:19:21.560963 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 12 18:19:21.561000 audit: BPF prog-id=111 op=LOAD Dec 12 18:19:21.565628 kernel: audit: type=1334 audit(1765563561.561:395): prog-id=111 op=LOAD Dec 12 18:19:21.572822 kernel: audit: type=1334 audit(1765563561.561:396): prog-id=77 op=UNLOAD Dec 12 18:19:21.572960 kernel: audit: type=1334 audit(1765563561.562:397): prog-id=112 op=LOAD Dec 12 18:19:21.561000 audit: BPF prog-id=77 op=UNLOAD Dec 12 18:19:21.562000 audit: BPF prog-id=112 op=LOAD Dec 12 18:19:21.562000 audit: BPF prog-id=62 op=UNLOAD Dec 12 18:19:21.577503 kernel: audit: type=1334 audit(1765563561.562:398): prog-id=62 op=UNLOAD Dec 12 18:19:21.565000 audit: BPF prog-id=113 op=LOAD Dec 12 18:19:21.565000 audit: BPF prog-id=69 op=UNLOAD Dec 12 18:19:21.579719 kernel: audit: type=1334 audit(1765563561.565:399): prog-id=113 op=LOAD Dec 12 18:19:21.579811 kernel: audit: type=1334 audit(1765563561.565:400): prog-id=69 op=UNLOAD Dec 12 18:19:21.565000 audit: BPF prog-id=114 op=LOAD Dec 12 18:19:21.565000 audit: BPF prog-id=115 op=LOAD Dec 12 18:19:21.585678 kernel: audit: type=1334 audit(1765563561.565:401): prog-id=114 op=LOAD Dec 12 18:19:21.585795 kernel: audit: type=1334 audit(1765563561.565:402): prog-id=115 op=LOAD Dec 12 18:19:21.585845 kernel: audit: type=1334 audit(1765563561.565:403): prog-id=70 op=UNLOAD Dec 12 18:19:21.565000 audit: BPF prog-id=70 op=UNLOAD Dec 12 18:19:21.565000 audit: BPF prog-id=71 op=UNLOAD Dec 12 18:19:21.567000 audit: BPF prog-id=116 op=LOAD Dec 12 18:19:21.567000 audit: BPF prog-id=63 op=UNLOAD Dec 12 18:19:21.567000 audit: BPF prog-id=117 op=LOAD Dec 12 18:19:21.567000 audit: BPF prog-id=118 op=LOAD Dec 12 18:19:21.567000 audit: BPF prog-id=64 op=UNLOAD Dec 12 18:19:21.567000 audit: BPF prog-id=65 op=UNLOAD Dec 12 18:19:21.568000 audit: BPF prog-id=119 op=LOAD Dec 12 18:19:21.568000 audit: BPF prog-id=120 op=LOAD Dec 12 18:19:21.568000 audit: BPF prog-id=72 op=UNLOAD Dec 12 18:19:21.568000 audit: BPF prog-id=73 op=UNLOAD Dec 12 18:19:21.569000 audit: BPF prog-id=121 op=LOAD Dec 12 18:19:21.569000 audit: BPF prog-id=66 op=UNLOAD Dec 12 18:19:21.569000 audit: BPF prog-id=122 op=LOAD Dec 12 18:19:21.569000 audit: BPF prog-id=123 op=LOAD Dec 12 18:19:21.569000 audit: BPF prog-id=67 op=UNLOAD Dec 12 18:19:21.569000 audit: BPF prog-id=68 op=UNLOAD Dec 12 18:19:21.573000 audit: BPF prog-id=124 op=LOAD Dec 12 18:19:21.573000 audit: BPF prog-id=74 op=UNLOAD Dec 12 18:19:21.574000 audit: BPF prog-id=125 op=LOAD Dec 12 18:19:21.574000 audit: BPF prog-id=126 op=LOAD Dec 12 18:19:21.574000 audit: BPF prog-id=75 op=UNLOAD Dec 12 18:19:21.574000 audit: BPF prog-id=76 op=UNLOAD Dec 12 18:19:21.576000 audit: BPF prog-id=127 op=LOAD Dec 12 18:19:21.586000 audit: BPF prog-id=78 op=UNLOAD Dec 12 18:19:21.586000 audit: BPF prog-id=128 op=LOAD Dec 12 18:19:21.586000 audit: BPF prog-id=129 op=LOAD Dec 12 18:19:21.586000 audit: BPF prog-id=79 op=UNLOAD Dec 12 18:19:21.586000 audit: BPF prog-id=80 op=UNLOAD Dec 12 18:19:21.589000 audit: BPF prog-id=130 op=LOAD Dec 12 18:19:21.589000 audit: BPF prog-id=61 op=UNLOAD Dec 12 18:19:21.862108 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:19:21.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:19:21.877335 (kubelet)[2819]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 18:19:21.966224 kubelet[2819]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:19:21.966224 kubelet[2819]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 18:19:21.966224 kubelet[2819]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:19:21.967528 kubelet[2819]: I1212 18:19:21.966972 2819 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 18:19:21.981556 kubelet[2819]: I1212 18:19:21.980454 2819 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 12 18:19:21.981556 kubelet[2819]: I1212 18:19:21.980981 2819 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 18:19:21.982118 kubelet[2819]: I1212 18:19:21.982089 2819 server.go:954] "Client rotation is on, will bootstrap in background" Dec 12 18:19:21.987416 kubelet[2819]: I1212 18:19:21.987369 2819 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 12 18:19:22.002736 kubelet[2819]: I1212 18:19:22.002680 2819 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 18:19:22.013902 kubelet[2819]: I1212 18:19:22.013308 2819 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 18:19:22.019020 kubelet[2819]: I1212 18:19:22.018786 2819 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 12 18:19:22.019177 kubelet[2819]: I1212 18:19:22.019110 2819 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 18:19:22.019458 kubelet[2819]: I1212 18:19:22.019155 2819 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515.1.0-f-8be9c60ab1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 18:19:22.019458 kubelet[2819]: I1212 18:19:22.019445 2819 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 18:19:22.019458 kubelet[2819]: I1212 18:19:22.019461 2819 container_manager_linux.go:304] "Creating device plugin manager" Dec 12 18:19:22.020914 kubelet[2819]: I1212 18:19:22.019547 2819 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:19:22.020914 kubelet[2819]: I1212 18:19:22.019848 2819 kubelet.go:446] "Attempting to sync node with API server" Dec 12 18:19:22.020914 kubelet[2819]: I1212 18:19:22.019875 2819 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 18:19:22.020914 kubelet[2819]: I1212 18:19:22.020575 2819 kubelet.go:352] "Adding apiserver pod source" Dec 12 18:19:22.020914 kubelet[2819]: I1212 18:19:22.020599 2819 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 18:19:22.025096 kubelet[2819]: I1212 18:19:22.025033 2819 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 12 18:19:22.027425 kubelet[2819]: I1212 18:19:22.027003 2819 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 12 18:19:22.031062 kubelet[2819]: I1212 18:19:22.030807 2819 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 18:19:22.031062 kubelet[2819]: I1212 18:19:22.030872 2819 server.go:1287] "Started kubelet" Dec 12 18:19:22.044625 kubelet[2819]: I1212 18:19:22.044328 2819 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 18:19:22.049431 kubelet[2819]: I1212 18:19:22.048929 2819 
server.go:479] "Adding debug handlers to kubelet server" Dec 12 18:19:22.053921 kubelet[2819]: I1212 18:19:22.053815 2819 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 18:19:22.073173 kubelet[2819]: I1212 18:19:22.073056 2819 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 18:19:22.078103 kubelet[2819]: I1212 18:19:22.077242 2819 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 18:19:22.081788 kubelet[2819]: I1212 18:19:22.080667 2819 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 18:19:22.084066 kubelet[2819]: I1212 18:19:22.084017 2819 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 18:19:22.087851 kubelet[2819]: E1212 18:19:22.087623 2819 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-f-8be9c60ab1\" not found" Dec 12 18:19:22.095616 kubelet[2819]: I1212 18:19:22.095584 2819 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 18:19:22.101718 kubelet[2819]: I1212 18:19:22.101680 2819 factory.go:221] Registration of the systemd container factory successfully Dec 12 18:19:22.102094 kubelet[2819]: I1212 18:19:22.101972 2819 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 18:19:22.108677 kubelet[2819]: I1212 18:19:22.108615 2819 reconciler.go:26] "Reconciler: start to sync state" Dec 12 18:19:22.109465 kubelet[2819]: E1212 18:19:22.109046 2819 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 18:19:22.114110 kubelet[2819]: I1212 18:19:22.112708 2819 factory.go:221] Registration of the containerd container factory successfully Dec 12 18:19:22.122461 kubelet[2819]: I1212 18:19:22.121698 2819 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 12 18:19:22.124664 kubelet[2819]: I1212 18:19:22.124454 2819 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 12 18:19:22.124880 kubelet[2819]: I1212 18:19:22.124863 2819 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 12 18:19:22.125010 kubelet[2819]: I1212 18:19:22.124999 2819 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 12 18:19:22.125123 kubelet[2819]: I1212 18:19:22.125111 2819 kubelet.go:2382] "Starting kubelet main sync loop" Dec 12 18:19:22.126603 kubelet[2819]: E1212 18:19:22.125297 2819 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 18:19:22.222352 kubelet[2819]: I1212 18:19:22.221570 2819 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 18:19:22.222352 kubelet[2819]: I1212 18:19:22.221594 2819 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 18:19:22.222352 kubelet[2819]: I1212 18:19:22.221637 2819 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:19:22.222352 kubelet[2819]: I1212 18:19:22.221919 2819 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 12 18:19:22.222352 kubelet[2819]: I1212 18:19:22.221949 2819 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 12 18:19:22.222352 kubelet[2819]: I1212 18:19:22.221979 2819 policy_none.go:49] "None policy: Start" Dec 12 18:19:22.222352 kubelet[2819]: I1212 18:19:22.221995 2819 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 18:19:22.222352 kubelet[2819]: I1212 18:19:22.222009 2819 state_mem.go:35] "Initializing new in-memory state store" Dec 12 18:19:22.222352 kubelet[2819]: I1212 18:19:22.222197 2819 state_mem.go:75] "Updated machine memory state" Dec 12 18:19:22.226363 kubelet[2819]: E1212 18:19:22.225781 2819 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 12 18:19:22.230283 kubelet[2819]: I1212 18:19:22.229134 2819 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 12 18:19:22.231049 kubelet[2819]: I1212 18:19:22.230912 2819 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 18:19:22.231049 kubelet[2819]: I1212 18:19:22.230938 2819 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 18:19:22.238528 kubelet[2819]: I1212 18:19:22.238467 2819 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 18:19:22.245661 kubelet[2819]: E1212 18:19:22.245563 2819 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 18:19:22.346843 kubelet[2819]: I1212 18:19:22.346733 2819 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:22.362500 kubelet[2819]: I1212 18:19:22.362183 2819 kubelet_node_status.go:124] "Node was previously registered" node="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:22.362500 kubelet[2819]: I1212 18:19:22.362292 2819 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:22.428653 kubelet[2819]: I1212 18:19:22.427732 2819 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:22.430566 kubelet[2819]: I1212 18:19:22.430119 2819 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:22.431711 kubelet[2819]: I1212 18:19:22.431521 2819 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:22.439981 kubelet[2819]: W1212 18:19:22.439538 2819 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 12 18:19:22.439981 kubelet[2819]: E1212 18:19:22.439648 2819 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515.1.0-f-8be9c60ab1\" already exists" pod="kube-system/kube-apiserver-ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:22.442582 kubelet[2819]: W1212 18:19:22.442414 2819 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 12 18:19:22.442582 kubelet[2819]: E1212 18:19:22.442529 2819 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515.1.0-f-8be9c60ab1\" already exists" pod="kube-system/kube-scheduler-ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:22.443163 kubelet[2819]: W1212 18:19:22.442878 2819 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 12 18:19:22.516243 kubelet[2819]: I1212 18:19:22.515659 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/63cd94081228352e06afffa711d03c88-ca-certs\") pod \"kube-controller-manager-ci-4515.1.0-f-8be9c60ab1\" (UID: \"63cd94081228352e06afffa711d03c88\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:22.516243 kubelet[2819]: I1212 18:19:22.515772 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/63cd94081228352e06afffa711d03c88-flexvolume-dir\") pod \"kube-controller-manager-ci-4515.1.0-f-8be9c60ab1\" (UID: \"63cd94081228352e06afffa711d03c88\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:22.516243 kubelet[2819]: I1212 18:19:22.515849 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/63cd94081228352e06afffa711d03c88-k8s-certs\") pod \"kube-controller-manager-ci-4515.1.0-f-8be9c60ab1\" (UID: \"63cd94081228352e06afffa711d03c88\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:22.516243 kubelet[2819]: I1212 
18:19:22.515912 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/63cd94081228352e06afffa711d03c88-kubeconfig\") pod \"kube-controller-manager-ci-4515.1.0-f-8be9c60ab1\" (UID: \"63cd94081228352e06afffa711d03c88\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:22.516243 kubelet[2819]: I1212 18:19:22.515940 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9797f0dda91eed814a7e9efb94450320-kubeconfig\") pod \"kube-scheduler-ci-4515.1.0-f-8be9c60ab1\" (UID: \"9797f0dda91eed814a7e9efb94450320\") " pod="kube-system/kube-scheduler-ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:22.516837 kubelet[2819]: I1212 18:19:22.516043 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b73bd2f6f26f0527c7e5469e2fb4776-k8s-certs\") pod \"kube-apiserver-ci-4515.1.0-f-8be9c60ab1\" (UID: \"9b73bd2f6f26f0527c7e5469e2fb4776\") " pod="kube-system/kube-apiserver-ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:22.516837 kubelet[2819]: I1212 18:19:22.516079 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b73bd2f6f26f0527c7e5469e2fb4776-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515.1.0-f-8be9c60ab1\" (UID: \"9b73bd2f6f26f0527c7e5469e2fb4776\") " pod="kube-system/kube-apiserver-ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:22.516837 kubelet[2819]: I1212 18:19:22.516148 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b73bd2f6f26f0527c7e5469e2fb4776-ca-certs\") pod \"kube-apiserver-ci-4515.1.0-f-8be9c60ab1\" (UID: \"9b73bd2f6f26f0527c7e5469e2fb4776\") " pod="kube-system/kube-apiserver-ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:22.516837 kubelet[2819]: I1212 18:19:22.516180 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/63cd94081228352e06afffa711d03c88-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515.1.0-f-8be9c60ab1\" (UID: \"63cd94081228352e06afffa711d03c88\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:22.740771 kubelet[2819]: E1212 18:19:22.740606 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:22.743971 kubelet[2819]: E1212 18:19:22.743745 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:22.744632 kubelet[2819]: E1212 18:19:22.743979 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:23.034496 kubelet[2819]: I1212 18:19:23.034421 2819 apiserver.go:52] "Watching apiserver" Dec 12 18:19:23.096818 kubelet[2819]: I1212 18:19:23.096768 2819 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 
18:19:23.175510 kubelet[2819]: I1212 18:19:23.172877 2819 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:23.175510 kubelet[2819]: I1212 18:19:23.173349 2819 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:23.175992 kubelet[2819]: E1212 18:19:23.175959 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:23.183693 kubelet[2819]: W1212 18:19:23.183657 2819 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 12 18:19:23.184283 kubelet[2819]: E1212 18:19:23.183948 2819 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4515.1.0-f-8be9c60ab1\" already exists" pod="kube-system/kube-controller-manager-ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:23.184283 kubelet[2819]: E1212 18:19:23.184176 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:23.187539 kubelet[2819]: W1212 18:19:23.187490 2819 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 12 18:19:23.188454 kubelet[2819]: E1212 18:19:23.187826 2819 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515.1.0-f-8be9c60ab1\" already exists" pod="kube-system/kube-apiserver-ci-4515.1.0-f-8be9c60ab1" Dec 12 18:19:23.188454 kubelet[2819]: E1212 18:19:23.188018 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:23.214612 kubelet[2819]: I1212 18:19:23.214469 2819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4515.1.0-f-8be9c60ab1" podStartSLOduration=4.214391872 podStartE2EDuration="4.214391872s" podCreationTimestamp="2025-12-12 18:19:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:19:23.213201571 +0000 UTC m=+1.325169230" watchObservedRunningTime="2025-12-12 18:19:23.214391872 +0000 UTC m=+1.326359515" Dec 12 18:19:23.235592 kubelet[2819]: I1212 18:19:23.235521 2819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4515.1.0-f-8be9c60ab1" podStartSLOduration=1.235455236 podStartE2EDuration="1.235455236s" podCreationTimestamp="2025-12-12 18:19:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:19:23.233022524 +0000 UTC m=+1.344990186" watchObservedRunningTime="2025-12-12 18:19:23.235455236 +0000 UTC m=+1.347422899" Dec 12 18:19:23.723565 update_engine[1593]: I20251212 18:19:23.722839 1593 update_attempter.cc:509] Updating boot flags... 
Dec 12 18:19:24.174569 kubelet[2819]: E1212 18:19:24.174531 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:24.175522 kubelet[2819]: E1212 18:19:24.175049 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:24.175522 kubelet[2819]: E1212 18:19:24.175406 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:25.177280 kubelet[2819]: E1212 18:19:25.177243 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:26.044115 kubelet[2819]: E1212 18:19:26.043766 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:26.069663 kubelet[2819]: I1212 18:19:26.069224 2819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4515.1.0-f-8be9c60ab1" podStartSLOduration=7.06920469 podStartE2EDuration="7.06920469s" podCreationTimestamp="2025-12-12 18:19:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:19:23.247502944 +0000 UTC m=+1.359470599" watchObservedRunningTime="2025-12-12 18:19:26.06920469 +0000 UTC m=+4.181172350" Dec 12 18:19:26.179226 kubelet[2819]: E1212 18:19:26.179060 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:26.182630 kubelet[2819]: E1212 18:19:26.180409 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:27.151814 kubelet[2819]: I1212 18:19:27.151774 2819 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 12 18:19:27.152880 containerd[1616]: time="2025-12-12T18:19:27.152358068Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 12 18:19:27.153719 kubelet[2819]: I1212 18:19:27.153692 2819 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 12 18:19:27.185301 kubelet[2819]: E1212 18:19:27.184615 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:27.816078 systemd[1]: Created slice kubepods-besteffort-podb74d23e5_9fe0_42a2_825d_d79a12ebbe66.slice - libcontainer container kubepods-besteffort-podb74d23e5_9fe0_42a2_825d_d79a12ebbe66.slice. 
Dec 12 18:19:27.860734 kubelet[2819]: I1212 18:19:27.859938 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96wkg\" (UniqueName: \"kubernetes.io/projected/b74d23e5-9fe0-42a2-825d-d79a12ebbe66-kube-api-access-96wkg\") pod \"kube-proxy-mc5rr\" (UID: \"b74d23e5-9fe0-42a2-825d-d79a12ebbe66\") " pod="kube-system/kube-proxy-mc5rr" Dec 12 18:19:27.860734 kubelet[2819]: I1212 18:19:27.860598 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b74d23e5-9fe0-42a2-825d-d79a12ebbe66-xtables-lock\") pod \"kube-proxy-mc5rr\" (UID: \"b74d23e5-9fe0-42a2-825d-d79a12ebbe66\") " pod="kube-system/kube-proxy-mc5rr" Dec 12 18:19:27.860734 kubelet[2819]: I1212 18:19:27.860641 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b74d23e5-9fe0-42a2-825d-d79a12ebbe66-lib-modules\") pod \"kube-proxy-mc5rr\" (UID: \"b74d23e5-9fe0-42a2-825d-d79a12ebbe66\") " pod="kube-system/kube-proxy-mc5rr" Dec 12 18:19:27.860734 kubelet[2819]: I1212 18:19:27.860669 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b74d23e5-9fe0-42a2-825d-d79a12ebbe66-kube-proxy\") pod \"kube-proxy-mc5rr\" (UID: \"b74d23e5-9fe0-42a2-825d-d79a12ebbe66\") " pod="kube-system/kube-proxy-mc5rr" Dec 12 18:19:28.127976 kubelet[2819]: E1212 18:19:28.126722 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:28.129537 containerd[1616]: time="2025-12-12T18:19:28.129490820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mc5rr,Uid:b74d23e5-9fe0-42a2-825d-d79a12ebbe66,Namespace:kube-system,Attempt:0,}" Dec 12 18:19:28.171998 containerd[1616]: time="2025-12-12T18:19:28.171640371Z" level=info msg="connecting to shim 90e7c387fdfc87e92e67364392cee049e160bf69d2108140c5b4f43a3a9f6877" address="unix:///run/containerd/s/5bb53e949324998281419eae5e412abf4d68a39ab691d615fb2fd26a87047662" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:19:28.219414 systemd[1]: Started cri-containerd-90e7c387fdfc87e92e67364392cee049e160bf69d2108140c5b4f43a3a9f6877.scope - libcontainer container 90e7c387fdfc87e92e67364392cee049e160bf69d2108140c5b4f43a3a9f6877. 
Dec 12 18:19:28.236000 audit: BPF prog-id=131 op=LOAD Dec 12 18:19:28.237557 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 12 18:19:28.237874 kernel: audit: type=1334 audit(1765563568.236:436): prog-id=131 op=LOAD Dec 12 18:19:28.240000 audit: BPF prog-id=132 op=LOAD Dec 12 18:19:28.240000 audit[2899]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2888 pid=2899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:28.244178 kernel: audit: type=1334 audit(1765563568.240:437): prog-id=132 op=LOAD Dec 12 18:19:28.244266 kernel: audit: type=1300 audit(1765563568.240:437): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2888 pid=2899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:28.240000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930653763333837666466633837653932653637333634333932636565 Dec 12 18:19:28.250784 kernel: audit: type=1327 audit(1765563568.240:437): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930653763333837666466633837653932653637333634333932636565 Dec 12 18:19:28.240000 audit: BPF prog-id=132 op=UNLOAD Dec 12 18:19:28.257873 kernel: audit: type=1334 audit(1765563568.240:438): prog-id=132 op=UNLOAD Dec 12 18:19:28.240000 audit[2899]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2888 pid=2899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:28.265516 kernel: audit: type=1300 audit(1765563568.240:438): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2888 pid=2899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:28.240000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930653763333837666466633837653932653637333634333932636565 Dec 12 18:19:28.241000 audit: BPF prog-id=133 op=LOAD Dec 12 18:19:28.274929 kernel: audit: type=1327 audit(1765563568.240:438): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930653763333837666466633837653932653637333634333932636565 Dec 12 18:19:28.275160 kernel: audit: type=1334 audit(1765563568.241:439): prog-id=133 op=LOAD Dec 12 18:19:28.275228 kernel: audit: type=1300 audit(1765563568.241:439): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2888 pid=2899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:28.241000 audit[2899]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2888 pid=2899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:28.241000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930653763333837666466633837653932653637333634333932636565 Dec 12 18:19:28.288524 kernel: audit: type=1327 audit(1765563568.241:439): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930653763333837666466633837653932653637333634333932636565 Dec 12 18:19:28.241000 audit: BPF prog-id=134 op=LOAD Dec 12 18:19:28.241000 audit[2899]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=2888 pid=2899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:28.241000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930653763333837666466633837653932653637333634333932636565 Dec 12 18:19:28.241000 audit: BPF prog-id=134 op=UNLOAD Dec 12 18:19:28.241000 audit[2899]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2888 pid=2899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:28.241000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930653763333837666466633837653932653637333634333932636565 Dec 12 18:19:28.241000 audit: BPF prog-id=133 op=UNLOAD Dec 12 18:19:28.241000 audit[2899]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2888 pid=2899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:28.241000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930653763333837666466633837653932653637333634333932636565 Dec 12 18:19:28.241000 audit: BPF prog-id=135 op=LOAD Dec 12 18:19:28.241000 audit[2899]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=2888 pid=2899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:28.241000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930653763333837666466633837653932653637333634333932636565 Dec 12 18:19:28.305947 containerd[1616]: time="2025-12-12T18:19:28.305853487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mc5rr,Uid:b74d23e5-9fe0-42a2-825d-d79a12ebbe66,Namespace:kube-system,Attempt:0,} returns sandbox id \"90e7c387fdfc87e92e67364392cee049e160bf69d2108140c5b4f43a3a9f6877\"" Dec 12 18:19:28.308924 kubelet[2819]: E1212 18:19:28.308872 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:28.320514 containerd[1616]: time="2025-12-12T18:19:28.319871620Z" level=info msg="CreateContainer within sandbox \"90e7c387fdfc87e92e67364392cee049e160bf69d2108140c5b4f43a3a9f6877\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 12 18:19:28.347065 containerd[1616]: time="2025-12-12T18:19:28.347005787Z" level=info msg="Container b48dabc56499bfbf38825ffa5812ecb524643a2bc405ca283a8e7878166fb9c8: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:19:28.356141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount214435704.mount: Deactivated successfully. Dec 12 18:19:28.391946 containerd[1616]: time="2025-12-12T18:19:28.391701985Z" level=info msg="CreateContainer within sandbox \"90e7c387fdfc87e92e67364392cee049e160bf69d2108140c5b4f43a3a9f6877\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b48dabc56499bfbf38825ffa5812ecb524643a2bc405ca283a8e7878166fb9c8\"" Dec 12 18:19:28.396297 containerd[1616]: time="2025-12-12T18:19:28.394776215Z" level=info msg="StartContainer for \"b48dabc56499bfbf38825ffa5812ecb524643a2bc405ca283a8e7878166fb9c8\"" Dec 12 18:19:28.401645 containerd[1616]: time="2025-12-12T18:19:28.401583371Z" level=info msg="connecting to shim b48dabc56499bfbf38825ffa5812ecb524643a2bc405ca283a8e7878166fb9c8" address="unix:///run/containerd/s/5bb53e949324998281419eae5e412abf4d68a39ab691d615fb2fd26a87047662" protocol=ttrpc version=3 Dec 12 18:19:28.440976 systemd[1]: Created slice kubepods-besteffort-pod8d8914b7_0676_40de_a86b_a353d3ab5fed.slice - libcontainer container kubepods-besteffort-pod8d8914b7_0676_40de_a86b_a353d3ab5fed.slice. Dec 12 18:19:28.458149 systemd[1]: Started cri-containerd-b48dabc56499bfbf38825ffa5812ecb524643a2bc405ca283a8e7878166fb9c8.scope - libcontainer container b48dabc56499bfbf38825ffa5812ecb524643a2bc405ca283a8e7878166fb9c8. 
Dec 12 18:19:28.463916 kubelet[2819]: I1212 18:19:28.463848 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhdlc\" (UniqueName: \"kubernetes.io/projected/8d8914b7-0676-40de-a86b-a353d3ab5fed-kube-api-access-xhdlc\") pod \"tigera-operator-7dcd859c48-gt7qz\" (UID: \"8d8914b7-0676-40de-a86b-a353d3ab5fed\") " pod="tigera-operator/tigera-operator-7dcd859c48-gt7qz" Dec 12 18:19:28.464240 kubelet[2819]: I1212 18:19:28.464040 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8d8914b7-0676-40de-a86b-a353d3ab5fed-var-lib-calico\") pod \"tigera-operator-7dcd859c48-gt7qz\" (UID: \"8d8914b7-0676-40de-a86b-a353d3ab5fed\") " pod="tigera-operator/tigera-operator-7dcd859c48-gt7qz" Dec 12 18:19:28.530000 audit: BPF prog-id=136 op=LOAD Dec 12 18:19:28.530000 audit[2926]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2888 pid=2926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:28.530000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234386461626335363439396266626633383832356666613538313265 Dec 12 18:19:28.531000 audit: BPF prog-id=137 op=LOAD Dec 12 18:19:28.531000 audit[2926]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2888 pid=2926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:28.531000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234386461626335363439396266626633383832356666613538313265 Dec 12 18:19:28.531000 audit: BPF prog-id=137 op=UNLOAD Dec 12 18:19:28.531000 audit[2926]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2888 pid=2926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:28.531000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234386461626335363439396266626633383832356666613538313265 Dec 12 18:19:28.531000 audit: BPF prog-id=136 op=UNLOAD Dec 12 18:19:28.531000 audit[2926]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2888 pid=2926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:28.531000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234386461626335363439396266626633383832356666613538313265 Dec 12 18:19:28.531000 audit: BPF prog-id=138 op=LOAD Dec 12 18:19:28.531000 audit[2926]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2888 pid=2926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:28.531000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234386461626335363439396266626633383832356666613538313265 Dec 12 18:19:28.562524 containerd[1616]: time="2025-12-12T18:19:28.562411602Z" level=info msg="StartContainer for \"b48dabc56499bfbf38825ffa5812ecb524643a2bc405ca283a8e7878166fb9c8\" returns successfully" Dec 12 18:19:28.751772 containerd[1616]: time="2025-12-12T18:19:28.751705372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-gt7qz,Uid:8d8914b7-0676-40de-a86b-a353d3ab5fed,Namespace:tigera-operator,Attempt:0,}" Dec 12 18:19:28.762268 kubelet[2819]: E1212 18:19:28.762196 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:28.792471 containerd[1616]: time="2025-12-12T18:19:28.791688851Z" level=info msg="connecting to shim d1ec71a551b4bde97e76c3d1a3c9635042c5771588dddd6e416ada76a3d5ea0b" address="unix:///run/containerd/s/0ebc68db195f45c2dd27374349b526bb8c08523444a4740ca8fb72ee44b6665b" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:19:28.826876 systemd[1]: Started cri-containerd-d1ec71a551b4bde97e76c3d1a3c9635042c5771588dddd6e416ada76a3d5ea0b.scope - libcontainer container d1ec71a551b4bde97e76c3d1a3c9635042c5771588dddd6e416ada76a3d5ea0b. 
Dec 12 18:19:28.856000 audit: BPF prog-id=139 op=LOAD Dec 12 18:19:28.858000 audit: BPF prog-id=140 op=LOAD Dec 12 18:19:28.858000 audit[2983]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=2972 pid=2983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:28.858000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431656337316135353162346264653937653736633364316133633936 Dec 12 18:19:28.858000 audit: BPF prog-id=140 op=UNLOAD Dec 12 18:19:28.858000 audit[2983]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2972 pid=2983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:28.858000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431656337316135353162346264653937653736633364316133633936 Dec 12 18:19:28.858000 audit: BPF prog-id=141 op=LOAD Dec 12 18:19:28.858000 audit[2983]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=2972 pid=2983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:28.858000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431656337316135353162346264653937653736633364316133633936 Dec 12 18:19:28.858000 audit: BPF prog-id=142 op=LOAD Dec 12 18:19:28.858000 audit[2983]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=2972 pid=2983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:28.858000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431656337316135353162346264653937653736633364316133633936 Dec 12 18:19:28.859000 audit: BPF prog-id=142 op=UNLOAD Dec 12 18:19:28.859000 audit[2983]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2972 pid=2983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:28.859000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431656337316135353162346264653937653736633364316133633936 Dec 12 18:19:28.859000 audit: BPF prog-id=141 op=UNLOAD Dec 12 18:19:28.859000 audit[2983]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2972 pid=2983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:28.859000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431656337316135353162346264653937653736633364316133633936 Dec 12 18:19:28.859000 audit: BPF prog-id=143 op=LOAD Dec 12 18:19:28.859000 audit[2983]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=2972 pid=2983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:28.859000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431656337316135353162346264653937653736633364316133633936 Dec 12 18:19:28.918252 containerd[1616]: time="2025-12-12T18:19:28.918191776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-gt7qz,Uid:8d8914b7-0676-40de-a86b-a353d3ab5fed,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d1ec71a551b4bde97e76c3d1a3c9635042c5771588dddd6e416ada76a3d5ea0b\"" Dec 12 18:19:28.922208 containerd[1616]: time="2025-12-12T18:19:28.922134779Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 12 18:19:28.926371 systemd-resolved[1288]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
Dec 12 18:19:28.955000 audit[3033]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3033 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:19:28.955000 audit[3033]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeb5a93570 a2=0 a3=7ffeb5a9355c items=0 ppid=2939 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:28.955000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 12 18:19:28.956000 audit[3034]: NETFILTER_CFG table=mangle:55 family=2 entries=1 op=nft_register_chain pid=3034 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:19:28.956000 audit[3034]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd50b293a0 a2=0 a3=7ffd50b2938c items=0 ppid=2939 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:28.956000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 12 18:19:28.959000 audit[3038]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=3038 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:19:28.959000 audit[3038]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffbb61d150 a2=0 a3=7fffbb61d13c items=0 ppid=2939 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:28.959000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 12 18:19:28.960000 audit[3039]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=3039 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:19:28.960000 audit[3039]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe190acc90 a2=0 a3=7ffe190acc7c items=0 ppid=2939 pid=3039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:28.960000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 12 18:19:28.963000 audit[3040]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_chain pid=3040 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:19:28.963000 audit[3040]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe5e901d00 a2=0 a3=7ffe5e901cec items=0 ppid=2939 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:28.963000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 12 18:19:28.963000 audit[3041]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3041 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Dec 12 18:19:28.963000 audit[3041]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc5f1d91f0 a2=0 a3=7ffc5f1d91dc items=0 ppid=2939 pid=3041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:28.963000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 12 18:19:29.069000 audit[3042]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3042 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:19:29.069000 audit[3042]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffecfe13b40 a2=0 a3=7ffecfe13b2c items=0 ppid=2939 pid=3042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.069000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 12 18:19:29.078000 audit[3044]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3044 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:19:29.078000 audit[3044]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff70bd4140 a2=0 a3=7fff70bd412c items=0 ppid=2939 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.078000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 12 18:19:29.085000 audit[3047]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:19:29.085000 audit[3047]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc94b88380 a2=0 a3=7ffc94b8836c items=0 ppid=2939 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.085000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 12 18:19:29.087000 audit[3048]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3048 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:19:29.087000 audit[3048]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff95103860 a2=0 a3=7fff9510384c items=0 ppid=2939 pid=3048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.087000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 12 
18:19:29.092000 audit[3050]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3050 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:19:29.092000 audit[3050]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff8a0f9680 a2=0 a3=7fff8a0f966c items=0 ppid=2939 pid=3050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.092000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 12 18:19:29.094000 audit[3051]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3051 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:19:29.094000 audit[3051]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffecf9cc360 a2=0 a3=7ffecf9cc34c items=0 ppid=2939 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.094000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 12 18:19:29.099000 audit[3053]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3053 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:19:29.099000 audit[3053]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd94fea870 a2=0 a3=7ffd94fea85c items=0 ppid=2939 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.099000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 12 18:19:29.106000 audit[3056]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3056 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:19:29.106000 audit[3056]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe3772b4f0 a2=0 a3=7ffe3772b4dc items=0 ppid=2939 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.106000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 12 18:19:29.108000 audit[3057]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3057 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:19:29.108000 audit[3057]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd0f371210 a2=0 a3=7ffd0f3711fc items=0 ppid=2939 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.108000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 12 18:19:29.114000 audit[3059]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3059 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:19:29.114000 audit[3059]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc2fe76ad0 a2=0 a3=7ffc2fe76abc items=0 ppid=2939 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.114000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 12 18:19:29.116000 audit[3060]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3060 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:19:29.116000 audit[3060]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc8ee70250 a2=0 a3=7ffc8ee7023c items=0 ppid=2939 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.116000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 12 18:19:29.121000 audit[3062]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3062 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:19:29.121000 audit[3062]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc1011bbc0 a2=0 a3=7ffc1011bbac items=0 ppid=2939 pid=3062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.121000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 18:19:29.131000 audit[3065]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3065 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:19:29.131000 audit[3065]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcf3c01970 a2=0 a3=7ffcf3c0195c items=0 ppid=2939 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.131000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 18:19:29.140000 audit[3068]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3068 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:19:29.140000 
audit[3068]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe98d1fab0 a2=0 a3=7ffe98d1fa9c items=0 ppid=2939 pid=3068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.140000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 12 18:19:29.142000 audit[3069]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3069 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:19:29.142000 audit[3069]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffddecd6240 a2=0 a3=7ffddecd622c items=0 ppid=2939 pid=3069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.142000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 12 18:19:29.148000 audit[3071]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3071 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:19:29.148000 audit[3071]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff3bceb680 a2=0 a3=7fff3bceb66c items=0 ppid=2939 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.148000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 18:19:29.154000 audit[3074]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3074 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:19:29.154000 audit[3074]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffffe0c3660 a2=0 a3=7ffffe0c364c items=0 ppid=2939 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.154000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 18:19:29.157000 audit[3075]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3075 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:19:29.157000 audit[3075]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffecffb1900 a2=0 a3=7ffecffb18ec items=0 ppid=2939 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.157000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 12 18:19:29.161000 
audit[3077]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3077 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:19:29.161000 audit[3077]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffff7776c40 a2=0 a3=7ffff7776c2c items=0 ppid=2939 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.161000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 12 18:19:29.198000 audit[3083]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3083 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:19:29.198000 audit[3083]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdac51ed80 a2=0 a3=7ffdac51ed6c items=0 ppid=2939 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.198000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:19:29.203138 kubelet[2819]: E1212 18:19:29.203056 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:29.204086 kubelet[2819]: E1212 18:19:29.203753 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:29.210000 audit[3083]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3083 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:19:29.210000 audit[3083]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffdac51ed80 a2=0 a3=7ffdac51ed6c items=0 ppid=2939 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.210000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:19:29.220000 audit[3089]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3089 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:19:29.220000 audit[3089]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd860d5d90 a2=0 a3=7ffd860d5d7c items=0 ppid=2939 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.220000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 12 18:19:29.232000 audit[3091]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3091 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:19:29.232000 audit[3091]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe29e5f5a0 a2=0 a3=7ffe29e5f58c items=0 ppid=2939 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.232000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 12 18:19:29.244000 audit[3094]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3094 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:19:29.244000 audit[3094]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffec8437030 a2=0 a3=7ffec843701c items=0 ppid=2939 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.244000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 12 18:19:29.256000 audit[3095]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3095 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:19:29.256000 audit[3095]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcec3a7a20 a2=0 a3=7ffcec3a7a0c items=0 ppid=2939 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.256000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 12 18:19:29.263000 audit[3097]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3097 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:19:29.263000 audit[3097]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc073692a0 a2=0 a3=7ffc0736928c items=0 ppid=2939 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.263000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 12 18:19:29.264000 audit[3098]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3098 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:19:29.264000 audit[3098]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff90646da0 a2=0 a3=7fff90646d8c items=0 ppid=2939 pid=3098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.264000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 12 18:19:29.270000 audit[3100]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3100 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:19:29.270000 audit[3100]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff4f9472c0 a2=0 a3=7fff4f9472ac items=0 ppid=2939 pid=3100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.270000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 12 18:19:29.276000 audit[3103]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3103 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:19:29.276000 audit[3103]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffeee406e70 a2=0 a3=7ffeee406e5c items=0 ppid=2939 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.276000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 12 18:19:29.279000 audit[3104]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3104 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:19:29.279000 audit[3104]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffccdcf9e00 a2=0 a3=7ffccdcf9dec items=0 ppid=2939 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.279000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 12 18:19:29.283000 audit[3106]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3106 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:19:29.283000 audit[3106]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc745d0aa0 a2=0 a3=7ffc745d0a8c items=0 ppid=2939 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.283000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 12 18:19:29.285000 audit[3107]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3107 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:19:29.285000 audit[3107]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd13ba8780 a2=0 a3=7ffd13ba876c items=0 
ppid=2939 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.285000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 12 18:19:29.289000 audit[3109]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3109 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:19:29.289000 audit[3109]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffccee56b90 a2=0 a3=7ffccee56b7c items=0 ppid=2939 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.289000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 18:19:29.296000 audit[3112]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3112 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:19:29.296000 audit[3112]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdaed7c180 a2=0 a3=7ffdaed7c16c items=0 ppid=2939 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.296000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 12 18:19:29.303000 audit[3115]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3115 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:19:29.303000 audit[3115]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc0db56d80 a2=0 a3=7ffc0db56d6c items=0 ppid=2939 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.303000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 12 18:19:29.308000 audit[3116]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3116 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:19:29.308000 audit[3116]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc100df6b0 a2=0 a3=7ffc100df69c items=0 ppid=2939 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.308000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 12 18:19:29.316000 
audit[3118]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3118 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:19:29.316000 audit[3118]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffcdf24a5d0 a2=0 a3=7ffcdf24a5bc items=0 ppid=2939 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.316000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 18:19:29.325000 audit[3121]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3121 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:19:29.325000 audit[3121]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff54839050 a2=0 a3=7fff5483903c items=0 ppid=2939 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.325000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 18:19:29.330000 audit[3122]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3122 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:19:29.330000 audit[3122]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc1379fc80 a2=0 a3=7ffc1379fc6c items=0 ppid=2939 pid=3122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.330000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 12 18:19:29.335000 audit[3124]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3124 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:19:29.335000 audit[3124]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffff9066650 a2=0 a3=7ffff906663c items=0 ppid=2939 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.335000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 12 18:19:29.337000 audit[3125]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3125 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:19:29.337000 audit[3125]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff51abbfc0 a2=0 a3=7fff51abbfac items=0 ppid=2939 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
12 18:19:29.337000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 12 18:19:29.342000 audit[3127]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3127 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:19:29.342000 audit[3127]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd0b8ff6c0 a2=0 a3=7ffd0b8ff6ac items=0 ppid=2939 pid=3127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.342000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 18:19:29.351000 audit[3130]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3130 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:19:29.351000 audit[3130]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd1d60bad0 a2=0 a3=7ffd1d60babc items=0 ppid=2939 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.351000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 18:19:29.357000 audit[3132]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3132 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 12 18:19:29.357000 audit[3132]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffcb91ac200 a2=0 a3=7ffcb91ac1ec items=0 ppid=2939 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.357000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:19:29.357000 audit[3132]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3132 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 12 18:19:29.357000 audit[3132]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffcb91ac200 a2=0 a3=7ffcb91ac1ec items=0 ppid=2939 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:29.357000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:19:30.669699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3694715260.mount: Deactivated successfully. 
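[editor's note] Each xtables invocation above is logged three ways: a NETFILTER_CFG summary, the raw SYSCALL record (arch=c000003e is x86_64, where syscall=46 is sendmsg, the netlink call that pushes the nft batch), and a PROCTITLE record carrying the command line as NUL-separated, hex-encoded argv. A minimal Python sketch for recovering the command from a PROCTITLE value copied out of the log; the example string is taken from the ip6tables-restore record just above:

    # Decode an audit PROCTITLE value (argv joined by NUL bytes, hex-encoded)
    # back into the original command line.
    hexstr = ("6970367461626C65732D726573746F7265002D770035002D5700"
              "313030303030002D2D6E6F666C757368002D2D636F756E74657273")
    argv = bytes.fromhex(hexstr).split(b"\x00")
    print(" ".join(a.decode() for a in argv))
    # -> ip6tables-restore -w 5 -W 100000 --noflush --counters

Note that longer commands are truncated by the kernel, which is why several PROCTITLE values above end mid-argument.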
Dec 12 18:19:31.768293 containerd[1616]: time="2025-12-12T18:19:31.768215329Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:19:31.772111 containerd[1616]: time="2025-12-12T18:19:31.771743049Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Dec 12 18:19:31.773473 containerd[1616]: time="2025-12-12T18:19:31.773415460Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:19:31.777378 containerd[1616]: time="2025-12-12T18:19:31.777317922Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:19:31.779541 containerd[1616]: time="2025-12-12T18:19:31.778689134Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.856502196s" Dec 12 18:19:31.779541 containerd[1616]: time="2025-12-12T18:19:31.778745203Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 12 18:19:31.784272 containerd[1616]: time="2025-12-12T18:19:31.782458231Z" level=info msg="CreateContainer within sandbox \"d1ec71a551b4bde97e76c3d1a3c9635042c5771588dddd6e416ada76a3d5ea0b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 12 18:19:31.800532 containerd[1616]: time="2025-12-12T18:19:31.797680013Z" level=info msg="Container e0ac5382ca534d54f22321e2afcd69ab2d04d68517f89126efca7377c99d3d5f: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:19:31.805939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount949659165.mount: Deactivated successfully. Dec 12 18:19:31.812525 containerd[1616]: time="2025-12-12T18:19:31.812252084Z" level=info msg="CreateContainer within sandbox \"d1ec71a551b4bde97e76c3d1a3c9635042c5771588dddd6e416ada76a3d5ea0b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e0ac5382ca534d54f22321e2afcd69ab2d04d68517f89126efca7377c99d3d5f\"" Dec 12 18:19:31.816547 containerd[1616]: time="2025-12-12T18:19:31.816353208Z" level=info msg="StartContainer for \"e0ac5382ca534d54f22321e2afcd69ab2d04d68517f89126efca7377c99d3d5f\"" Dec 12 18:19:31.818548 containerd[1616]: time="2025-12-12T18:19:31.818409279Z" level=info msg="connecting to shim e0ac5382ca534d54f22321e2afcd69ab2d04d68517f89126efca7377c99d3d5f" address="unix:///run/containerd/s/0ebc68db195f45c2dd27374349b526bb8c08523444a4740ca8fb72ee44b6665b" protocol=ttrpc version=3 Dec 12 18:19:31.856853 systemd[1]: Started cri-containerd-e0ac5382ca534d54f22321e2afcd69ab2d04d68517f89126efca7377c99d3d5f.scope - libcontainer container e0ac5382ca534d54f22321e2afcd69ab2d04d68517f89126efca7377c99d3d5f. 
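[editor's note] The containerd messages above report both the compressed bytes read for quay.io/tigera/operator:v1.38.7 and the pull duration, which together imply a rough effective pull rate. A quick back-of-the-envelope using the figures reported in the log:

    # Effective pull throughput implied by the containerd lines above
    # (both numbers copied verbatim from the log).
    bytes_read = 23558205        # "bytes read" for quay.io/tigera/operator:v1.38.7
    pull_seconds = 2.856502196   # "Pulled image ... in 2.856502196s"
    print(f"{bytes_read / pull_seconds / 1e6:.1f} MB/s")  # ~8.2 MB/s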
Dec 12 18:19:31.875000 audit: BPF prog-id=144 op=LOAD Dec 12 18:19:31.876000 audit: BPF prog-id=145 op=LOAD Dec 12 18:19:31.876000 audit[3142]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2972 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:31.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530616335333832636135333464353466323233323165326166636436 Dec 12 18:19:31.876000 audit: BPF prog-id=145 op=UNLOAD Dec 12 18:19:31.876000 audit[3142]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2972 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:31.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530616335333832636135333464353466323233323165326166636436 Dec 12 18:19:31.876000 audit: BPF prog-id=146 op=LOAD Dec 12 18:19:31.876000 audit[3142]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2972 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:31.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530616335333832636135333464353466323233323165326166636436 Dec 12 18:19:31.876000 audit: BPF prog-id=147 op=LOAD Dec 12 18:19:31.876000 audit[3142]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2972 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:31.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530616335333832636135333464353466323233323165326166636436 Dec 12 18:19:31.876000 audit: BPF prog-id=147 op=UNLOAD Dec 12 18:19:31.876000 audit[3142]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2972 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:31.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530616335333832636135333464353466323233323165326166636436 Dec 12 18:19:31.876000 audit: BPF prog-id=146 op=UNLOAD Dec 12 18:19:31.876000 audit[3142]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2972 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:31.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530616335333832636135333464353466323233323165326166636436 Dec 12 18:19:31.877000 audit: BPF prog-id=148 op=LOAD Dec 12 18:19:31.877000 audit[3142]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2972 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:31.877000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530616335333832636135333464353466323233323165326166636436 Dec 12 18:19:31.905610 containerd[1616]: time="2025-12-12T18:19:31.905554592Z" level=info msg="StartContainer for \"e0ac5382ca534d54f22321e2afcd69ab2d04d68517f89126efca7377c99d3d5f\" returns successfully" Dec 12 18:19:32.144695 kubelet[2819]: I1212 18:19:32.144036 2819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mc5rr" podStartSLOduration=5.144010762 podStartE2EDuration="5.144010762s" podCreationTimestamp="2025-12-12 18:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:19:29.247360695 +0000 UTC m=+7.359328358" watchObservedRunningTime="2025-12-12 18:19:32.144010762 +0000 UTC m=+10.255978405" Dec 12 18:19:34.592219 kubelet[2819]: E1212 18:19:34.591783 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:34.632612 kubelet[2819]: I1212 18:19:34.632527 2819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-gt7qz" podStartSLOduration=3.772843059 podStartE2EDuration="6.632500811s" podCreationTimestamp="2025-12-12 18:19:28 +0000 UTC" firstStartedPulling="2025-12-12 18:19:28.921238987 +0000 UTC m=+7.033206643" lastFinishedPulling="2025-12-12 18:19:31.78089674 +0000 UTC m=+9.892864395" observedRunningTime="2025-12-12 18:19:32.225937912 +0000 UTC m=+10.337905576" watchObservedRunningTime="2025-12-12 18:19:34.632500811 +0000 UTC m=+12.744468484" Dec 12 18:19:35.250471 kubelet[2819]: E1212 18:19:35.250437 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:38.220739 sudo[1866]: pam_unix(sudo:session): session closed for user root Dec 12 18:19:38.228698 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 12 18:19:38.228848 kernel: audit: type=1106 audit(1765563578.220:516): pid=1866 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" 
exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:19:38.220000 audit[1866]: USER_END pid=1866 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:19:38.229024 sshd[1865]: Connection closed by 147.75.109.163 port 53732 Dec 12 18:19:38.231707 sshd-session[1862]: pam_unix(sshd:session): session closed for user core Dec 12 18:19:38.220000 audit[1866]: CRED_DISP pid=1866 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:19:38.241218 kernel: audit: type=1104 audit(1765563578.220:517): pid=1866 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:19:38.245094 systemd[1]: sshd@8-64.23.253.31:22-147.75.109.163:53732.service: Deactivated successfully. Dec 12 18:19:38.251099 systemd[1]: session-9.scope: Deactivated successfully. Dec 12 18:19:38.251665 systemd[1]: session-9.scope: Consumed 5.675s CPU time, 162.5M memory peak. Dec 12 18:19:38.239000 audit[1862]: USER_END pid=1862 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:19:38.261523 kernel: audit: type=1106 audit(1765563578.239:518): pid=1862 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:19:38.263687 systemd-logind[1590]: Session 9 logged out. Waiting for processes to exit. Dec 12 18:19:38.239000 audit[1862]: CRED_DISP pid=1862 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:19:38.279859 kernel: audit: type=1104 audit(1765563578.239:519): pid=1862 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:19:38.280614 systemd-logind[1590]: Removed session 9. Dec 12 18:19:38.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-64.23.253.31:22-147.75.109.163:53732 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:19:38.291533 kernel: audit: type=1131 audit(1765563578.245:520): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-64.23.253.31:22-147.75.109.163:53732 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:19:39.253000 audit[3225]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3225 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:19:39.259529 kernel: audit: type=1325 audit(1765563579.253:521): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3225 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:19:39.253000 audit[3225]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe5c8b7330 a2=0 a3=7ffe5c8b731c items=0 ppid=2939 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:39.269522 kernel: audit: type=1300 audit(1765563579.253:521): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe5c8b7330 a2=0 a3=7ffe5c8b731c items=0 ppid=2939 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:39.269692 kernel: audit: type=1327 audit(1765563579.253:521): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:19:39.253000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:19:39.260000 audit[3225]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3225 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:19:39.275553 kernel: audit: type=1325 audit(1765563579.260:522): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3225 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:19:39.260000 audit[3225]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe5c8b7330 a2=0 a3=0 items=0 ppid=2939 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:39.260000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:19:39.287529 kernel: audit: type=1300 audit(1765563579.260:522): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe5c8b7330 a2=0 a3=0 items=0 ppid=2939 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:39.291000 audit[3227]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3227 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:19:39.291000 audit[3227]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc48f06b20 a2=0 a3=7ffc48f06b0c items=0 ppid=2939 pid=3227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:39.291000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:19:39.299000 audit[3227]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3227 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:19:39.299000 audit[3227]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc48f06b20 a2=0 a3=0 items=0 ppid=2939 pid=3227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:39.299000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:19:44.132000 audit[3229]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3229 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:19:44.134625 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 12 18:19:44.134994 kernel: audit: type=1325 audit(1765563584.132:525): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3229 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:19:44.132000 audit[3229]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd131284c0 a2=0 a3=7ffd131284ac items=0 ppid=2939 pid=3229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:44.132000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:19:44.154976 kernel: audit: type=1300 audit(1765563584.132:525): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd131284c0 a2=0 a3=7ffd131284ac items=0 ppid=2939 pid=3229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:44.155131 kernel: audit: type=1327 audit(1765563584.132:525): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:19:44.163000 audit[3229]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3229 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:19:44.163000 audit[3229]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd131284c0 a2=0 a3=0 items=0 ppid=2939 pid=3229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:44.171116 kernel: audit: type=1325 audit(1765563584.163:526): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3229 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:19:44.171700 kernel: audit: type=1300 audit(1765563584.163:526): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd131284c0 a2=0 a3=0 items=0 ppid=2939 pid=3229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:44.163000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:19:44.182534 kernel: audit: type=1327 audit(1765563584.163:526): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 
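[editor's note] While kauditd is rate-limiting ("callbacks suppressed"), the kernel copies of these records identify themselves only by numeric type= codes; the pairings visible in this log (e.g. type=1106 alongside the USER_END record, type=1325 alongside NETFILTER_CFG) are collected below as a small reference, assuming the standard Linux audit type numbering:

    # Numeric audit record types seen in the kernel lines above, with the
    # symbolic names used by the corresponding userspace records in this log.
    AUDIT_TYPES = {
        1300: "SYSCALL",
        1325: "NETFILTER_CFG",
        1327: "PROCTITLE",
        1104: "CRED_DISP",
        1106: "USER_END",
        1131: "SERVICE_STOP",
    }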
Dec 12 18:19:44.199000 audit[3231]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3231 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:19:44.199000 audit[3231]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd36749b60 a2=0 a3=7ffd36749b4c items=0 ppid=2939 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:44.207615 kernel: audit: type=1325 audit(1765563584.199:527): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3231 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:19:44.207785 kernel: audit: type=1300 audit(1765563584.199:527): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd36749b60 a2=0 a3=7ffd36749b4c items=0 ppid=2939 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:44.199000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:19:44.219533 kernel: audit: type=1327 audit(1765563584.199:527): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:19:44.214000 audit[3231]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3231 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:19:44.214000 audit[3231]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd36749b60 a2=0 a3=0 items=0 ppid=2939 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:44.224791 kernel: audit: type=1325 audit(1765563584.214:528): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3231 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:19:44.214000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:19:45.302000 audit[3234]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3234 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:19:45.302000 audit[3234]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdb1436240 a2=0 a3=7ffdb143622c items=0 ppid=2939 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:45.302000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:19:45.307000 audit[3234]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3234 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:19:45.307000 audit[3234]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdb1436240 a2=0 a3=0 items=0 ppid=2939 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 12 18:19:45.307000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:19:46.699000 audit[3236]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3236 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:19:46.699000 audit[3236]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff4da5df10 a2=0 a3=7fff4da5defc items=0 ppid=2939 pid=3236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:46.699000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:19:46.708000 audit[3236]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3236 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:19:46.708000 audit[3236]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff4da5df10 a2=0 a3=0 items=0 ppid=2939 pid=3236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:46.708000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:19:46.744167 systemd[1]: Created slice kubepods-besteffort-podb2a9f983_dd40_4a5f_a699_2e3074e9afd2.slice - libcontainer container kubepods-besteffort-podb2a9f983_dd40_4a5f_a699_2e3074e9afd2.slice. Dec 12 18:19:46.795513 kubelet[2819]: I1212 18:19:46.795358 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48vbq\" (UniqueName: \"kubernetes.io/projected/b2a9f983-dd40-4a5f-a699-2e3074e9afd2-kube-api-access-48vbq\") pod \"calico-typha-7455d79c87-7h66h\" (UID: \"b2a9f983-dd40-4a5f-a699-2e3074e9afd2\") " pod="calico-system/calico-typha-7455d79c87-7h66h" Dec 12 18:19:46.796824 kubelet[2819]: I1212 18:19:46.796532 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2a9f983-dd40-4a5f-a699-2e3074e9afd2-tigera-ca-bundle\") pod \"calico-typha-7455d79c87-7h66h\" (UID: \"b2a9f983-dd40-4a5f-a699-2e3074e9afd2\") " pod="calico-system/calico-typha-7455d79c87-7h66h" Dec 12 18:19:46.796824 kubelet[2819]: I1212 18:19:46.796633 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b2a9f983-dd40-4a5f-a699-2e3074e9afd2-typha-certs\") pod \"calico-typha-7455d79c87-7h66h\" (UID: \"b2a9f983-dd40-4a5f-a699-2e3074e9afd2\") " pod="calico-system/calico-typha-7455d79c87-7h66h" Dec 12 18:19:46.957946 systemd[1]: Created slice kubepods-besteffort-podf8b9549a_db0a_4cad_b113_366f006b8d72.slice - libcontainer container kubepods-besteffort-podf8b9549a_db0a_4cad_b113_366f006b8d72.slice. 
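[editor's note] The "Created slice" lines above show how the cgroup slice for a BestEffort pod is derived from its UID (the same UID that appears in the kubelet volume-attach messages), with dashes replaced by underscores. A one-line sketch of that mapping, using the calico-typha pod UID from the log:

    # systemd slice name derived from a pod UID, matching the "Created slice" lines above.
    pod_uid = "b2a9f983-dd40-4a5f-a699-2e3074e9afd2"   # calico-typha pod UID from the kubelet messages
    slice_name = "kubepods-besteffort-pod" + pod_uid.replace("-", "_") + ".slice"
    print(slice_name)  # kubepods-besteffort-podb2a9f983_dd40_4a5f_a699_2e3074e9afd2.slice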
Dec 12 18:19:46.998670 kubelet[2819]: I1212 18:19:46.998022 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f8b9549a-db0a-4cad-b113-366f006b8d72-policysync\") pod \"calico-node-nnwcp\" (UID: \"f8b9549a-db0a-4cad-b113-366f006b8d72\") " pod="calico-system/calico-node-nnwcp" Dec 12 18:19:46.998670 kubelet[2819]: I1212 18:19:46.998084 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f8b9549a-db0a-4cad-b113-366f006b8d72-node-certs\") pod \"calico-node-nnwcp\" (UID: \"f8b9549a-db0a-4cad-b113-366f006b8d72\") " pod="calico-system/calico-node-nnwcp" Dec 12 18:19:46.998670 kubelet[2819]: I1212 18:19:46.998116 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f8b9549a-db0a-4cad-b113-366f006b8d72-cni-bin-dir\") pod \"calico-node-nnwcp\" (UID: \"f8b9549a-db0a-4cad-b113-366f006b8d72\") " pod="calico-system/calico-node-nnwcp" Dec 12 18:19:46.998670 kubelet[2819]: I1212 18:19:46.998150 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f8b9549a-db0a-4cad-b113-366f006b8d72-cni-log-dir\") pod \"calico-node-nnwcp\" (UID: \"f8b9549a-db0a-4cad-b113-366f006b8d72\") " pod="calico-system/calico-node-nnwcp" Dec 12 18:19:46.998670 kubelet[2819]: I1212 18:19:46.998177 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f8b9549a-db0a-4cad-b113-366f006b8d72-cni-net-dir\") pod \"calico-node-nnwcp\" (UID: \"f8b9549a-db0a-4cad-b113-366f006b8d72\") " pod="calico-system/calico-node-nnwcp" Dec 12 18:19:46.999146 kubelet[2819]: I1212 18:19:46.998199 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8b9549a-db0a-4cad-b113-366f006b8d72-tigera-ca-bundle\") pod \"calico-node-nnwcp\" (UID: \"f8b9549a-db0a-4cad-b113-366f006b8d72\") " pod="calico-system/calico-node-nnwcp" Dec 12 18:19:46.999146 kubelet[2819]: I1212 18:19:46.998232 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f8b9549a-db0a-4cad-b113-366f006b8d72-flexvol-driver-host\") pod \"calico-node-nnwcp\" (UID: \"f8b9549a-db0a-4cad-b113-366f006b8d72\") " pod="calico-system/calico-node-nnwcp" Dec 12 18:19:46.999146 kubelet[2819]: I1212 18:19:46.998268 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8b9549a-db0a-4cad-b113-366f006b8d72-lib-modules\") pod \"calico-node-nnwcp\" (UID: \"f8b9549a-db0a-4cad-b113-366f006b8d72\") " pod="calico-system/calico-node-nnwcp" Dec 12 18:19:46.999146 kubelet[2819]: I1212 18:19:46.998459 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lk7c\" (UniqueName: \"kubernetes.io/projected/f8b9549a-db0a-4cad-b113-366f006b8d72-kube-api-access-9lk7c\") pod \"calico-node-nnwcp\" (UID: \"f8b9549a-db0a-4cad-b113-366f006b8d72\") " pod="calico-system/calico-node-nnwcp" Dec 12 18:19:46.999146 kubelet[2819]: I1212 18:19:46.998718 2819 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f8b9549a-db0a-4cad-b113-366f006b8d72-var-lib-calico\") pod \"calico-node-nnwcp\" (UID: \"f8b9549a-db0a-4cad-b113-366f006b8d72\") " pod="calico-system/calico-node-nnwcp" Dec 12 18:19:47.002013 kubelet[2819]: I1212 18:19:46.998885 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f8b9549a-db0a-4cad-b113-366f006b8d72-var-run-calico\") pod \"calico-node-nnwcp\" (UID: \"f8b9549a-db0a-4cad-b113-366f006b8d72\") " pod="calico-system/calico-node-nnwcp" Dec 12 18:19:47.002013 kubelet[2819]: I1212 18:19:46.998941 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8b9549a-db0a-4cad-b113-366f006b8d72-xtables-lock\") pod \"calico-node-nnwcp\" (UID: \"f8b9549a-db0a-4cad-b113-366f006b8d72\") " pod="calico-system/calico-node-nnwcp" Dec 12 18:19:47.024977 kubelet[2819]: E1212 18:19:47.024786 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kbmx" podUID="3530bcd5-7985-42ba-8587-569180a87a41" Dec 12 18:19:47.051870 kubelet[2819]: E1212 18:19:47.051815 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:47.054856 containerd[1616]: time="2025-12-12T18:19:47.054791298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7455d79c87-7h66h,Uid:b2a9f983-dd40-4a5f-a699-2e3074e9afd2,Namespace:calico-system,Attempt:0,}" Dec 12 18:19:47.099670 kubelet[2819]: I1212 18:19:47.099604 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3530bcd5-7985-42ba-8587-569180a87a41-varrun\") pod \"csi-node-driver-5kbmx\" (UID: \"3530bcd5-7985-42ba-8587-569180a87a41\") " pod="calico-system/csi-node-driver-5kbmx" Dec 12 18:19:47.099856 kubelet[2819]: I1212 18:19:47.099710 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3530bcd5-7985-42ba-8587-569180a87a41-kubelet-dir\") pod \"csi-node-driver-5kbmx\" (UID: \"3530bcd5-7985-42ba-8587-569180a87a41\") " pod="calico-system/csi-node-driver-5kbmx" Dec 12 18:19:47.099856 kubelet[2819]: I1212 18:19:47.099813 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3530bcd5-7985-42ba-8587-569180a87a41-registration-dir\") pod \"csi-node-driver-5kbmx\" (UID: \"3530bcd5-7985-42ba-8587-569180a87a41\") " pod="calico-system/csi-node-driver-5kbmx" Dec 12 18:19:47.099966 kubelet[2819]: I1212 18:19:47.099858 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmd22\" (UniqueName: \"kubernetes.io/projected/3530bcd5-7985-42ba-8587-569180a87a41-kube-api-access-zmd22\") pod \"csi-node-driver-5kbmx\" (UID: \"3530bcd5-7985-42ba-8587-569180a87a41\") " pod="calico-system/csi-node-driver-5kbmx" Dec 12 
18:19:47.099966 kubelet[2819]: I1212 18:19:47.099888 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3530bcd5-7985-42ba-8587-569180a87a41-socket-dir\") pod \"csi-node-driver-5kbmx\" (UID: \"3530bcd5-7985-42ba-8587-569180a87a41\") " pod="calico-system/csi-node-driver-5kbmx" Dec 12 18:19:47.136426 containerd[1616]: time="2025-12-12T18:19:47.135970373Z" level=info msg="connecting to shim 4dc99cb0aef2629fa7827fa2b5eda535479fae8d01bf322260e1f6e9944929c1" address="unix:///run/containerd/s/a0b7b673dd1e3cec478a35c637083f886f5dc2ba1295050de98b970796aeea50" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:19:47.143626 kubelet[2819]: E1212 18:19:47.143568 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:47.143626 kubelet[2819]: W1212 18:19:47.143613 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:47.145988 kubelet[2819]: E1212 18:19:47.145916 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:47.146574 kubelet[2819]: E1212 18:19:47.146405 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:47.146574 kubelet[2819]: W1212 18:19:47.146428 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:47.146574 kubelet[2819]: E1212 18:19:47.146453 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:47.171425 kubelet[2819]: E1212 18:19:47.170621 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:47.171425 kubelet[2819]: W1212 18:19:47.170655 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:47.177501 kubelet[2819]: E1212 18:19:47.173235 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:47.201206 kubelet[2819]: E1212 18:19:47.201163 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:47.201206 kubelet[2819]: W1212 18:19:47.201193 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:47.201206 kubelet[2819]: E1212 18:19:47.201219 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:19:47.202139 kubelet[2819]: E1212 18:19:47.202085 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:47.202139 kubelet[2819]: W1212 18:19:47.202107 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:47.202376 kubelet[2819]: E1212 18:19:47.202252 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:47.202517 kubelet[2819]: E1212 18:19:47.202497 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:47.202517 kubelet[2819]: W1212 18:19:47.202511 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:47.203177 kubelet[2819]: E1212 18:19:47.202525 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:47.203177 kubelet[2819]: E1212 18:19:47.203045 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:47.203177 kubelet[2819]: W1212 18:19:47.203057 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:47.203177 kubelet[2819]: E1212 18:19:47.203088 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:47.203835 kubelet[2819]: E1212 18:19:47.203309 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:47.203835 kubelet[2819]: W1212 18:19:47.203318 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:47.203835 kubelet[2819]: E1212 18:19:47.203339 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:47.203835 kubelet[2819]: E1212 18:19:47.203571 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:47.203835 kubelet[2819]: W1212 18:19:47.203580 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:47.203835 kubelet[2819]: E1212 18:19:47.203599 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:19:47.205003 kubelet[2819]: E1212 18:19:47.203840 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:47.205003 kubelet[2819]: W1212 18:19:47.203854 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:47.205003 kubelet[2819]: E1212 18:19:47.203872 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:47.205003 kubelet[2819]: E1212 18:19:47.204712 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:47.205003 kubelet[2819]: W1212 18:19:47.204726 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:47.205003 kubelet[2819]: E1212 18:19:47.204740 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:47.205415 kubelet[2819]: E1212 18:19:47.205392 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:47.205461 kubelet[2819]: W1212 18:19:47.205424 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:47.205937 kubelet[2819]: E1212 18:19:47.205512 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:47.206900 kubelet[2819]: E1212 18:19:47.206659 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:47.206900 kubelet[2819]: W1212 18:19:47.206676 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:47.207341 kubelet[2819]: E1212 18:19:47.207255 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:47.207748 kubelet[2819]: E1212 18:19:47.207590 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:47.207748 kubelet[2819]: W1212 18:19:47.207606 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:47.208369 kubelet[2819]: E1212 18:19:47.208004 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:19:47.208369 kubelet[2819]: E1212 18:19:47.208098 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:47.208369 kubelet[2819]: W1212 18:19:47.208108 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:47.208369 kubelet[2819]: E1212 18:19:47.208387 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:47.208369 kubelet[2819]: E1212 18:19:47.208449 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:47.208369 kubelet[2819]: W1212 18:19:47.208458 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:47.210321 kubelet[2819]: E1212 18:19:47.210201 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:47.211946 kubelet[2819]: E1212 18:19:47.210614 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:47.211946 kubelet[2819]: W1212 18:19:47.210633 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:47.211946 kubelet[2819]: E1212 18:19:47.210722 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:47.211946 kubelet[2819]: E1212 18:19:47.210872 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:47.211946 kubelet[2819]: W1212 18:19:47.210880 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:47.211946 kubelet[2819]: E1212 18:19:47.211006 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:47.211946 kubelet[2819]: E1212 18:19:47.211055 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:47.211946 kubelet[2819]: W1212 18:19:47.211064 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:47.211946 kubelet[2819]: E1212 18:19:47.211140 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:19:47.211946 kubelet[2819]: E1212 18:19:47.211263 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:47.213209 kubelet[2819]: W1212 18:19:47.211270 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:47.213209 kubelet[2819]: E1212 18:19:47.211290 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:47.213209 kubelet[2819]: E1212 18:19:47.211467 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:47.213209 kubelet[2819]: W1212 18:19:47.211475 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:47.213209 kubelet[2819]: E1212 18:19:47.211510 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:47.213209 kubelet[2819]: E1212 18:19:47.211789 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:47.213209 kubelet[2819]: W1212 18:19:47.211802 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:47.213209 kubelet[2819]: E1212 18:19:47.211831 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:47.213209 kubelet[2819]: E1212 18:19:47.212089 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:47.213209 kubelet[2819]: W1212 18:19:47.212099 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:47.217138 kubelet[2819]: E1212 18:19:47.212124 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:47.217138 kubelet[2819]: E1212 18:19:47.212926 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:47.217138 kubelet[2819]: W1212 18:19:47.212942 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:47.217138 kubelet[2819]: E1212 18:19:47.212964 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:19:47.217138 kubelet[2819]: E1212 18:19:47.213669 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:47.217138 kubelet[2819]: W1212 18:19:47.213686 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:47.217138 kubelet[2819]: E1212 18:19:47.213777 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:47.217138 kubelet[2819]: E1212 18:19:47.214414 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:47.217138 kubelet[2819]: W1212 18:19:47.214433 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:47.217138 kubelet[2819]: E1212 18:19:47.214550 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:47.220750 kubelet[2819]: E1212 18:19:47.215714 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:47.220750 kubelet[2819]: W1212 18:19:47.215733 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:47.220750 kubelet[2819]: E1212 18:19:47.215772 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:47.220750 kubelet[2819]: E1212 18:19:47.216116 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:47.220750 kubelet[2819]: W1212 18:19:47.216129 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:47.220750 kubelet[2819]: E1212 18:19:47.216145 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:47.235362 kubelet[2819]: E1212 18:19:47.235289 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:47.235362 kubelet[2819]: W1212 18:19:47.235317 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:47.235362 kubelet[2819]: E1212 18:19:47.235342 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:19:47.236900 systemd[1]: Started cri-containerd-4dc99cb0aef2629fa7827fa2b5eda535479fae8d01bf322260e1f6e9944929c1.scope - libcontainer container 4dc99cb0aef2629fa7827fa2b5eda535479fae8d01bf322260e1f6e9944929c1. Dec 12 18:19:47.258000 audit: BPF prog-id=149 op=LOAD Dec 12 18:19:47.259000 audit: BPF prog-id=150 op=LOAD Dec 12 18:19:47.259000 audit[3263]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=3249 pid=3263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:47.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464633939636230616566323632396661373832376661326235656461 Dec 12 18:19:47.259000 audit: BPF prog-id=150 op=UNLOAD Dec 12 18:19:47.259000 audit[3263]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3249 pid=3263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:47.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464633939636230616566323632396661373832376661326235656461 Dec 12 18:19:47.260000 audit: BPF prog-id=151 op=LOAD Dec 12 18:19:47.260000 audit[3263]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=3249 pid=3263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:47.260000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464633939636230616566323632396661373832376661326235656461 Dec 12 18:19:47.260000 audit: BPF prog-id=152 op=LOAD Dec 12 18:19:47.260000 audit[3263]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=3249 pid=3263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:47.260000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464633939636230616566323632396661373832376661326235656461 Dec 12 18:19:47.260000 audit: BPF prog-id=152 op=UNLOAD Dec 12 18:19:47.260000 audit[3263]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3249 pid=3263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:47.260000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464633939636230616566323632396661373832376661326235656461 Dec 12 18:19:47.260000 audit: BPF prog-id=151 op=UNLOAD Dec 12 18:19:47.260000 audit[3263]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3249 pid=3263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:47.260000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464633939636230616566323632396661373832376661326235656461 Dec 12 18:19:47.260000 audit: BPF prog-id=153 op=LOAD Dec 12 18:19:47.260000 audit[3263]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002206e8 a2=98 a3=0 items=0 ppid=3249 pid=3263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:47.260000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464633939636230616566323632396661373832376661326235656461 Dec 12 18:19:47.266762 kubelet[2819]: E1212 18:19:47.266701 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:47.268405 containerd[1616]: time="2025-12-12T18:19:47.268346084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nnwcp,Uid:f8b9549a-db0a-4cad-b113-366f006b8d72,Namespace:calico-system,Attempt:0,}" Dec 12 18:19:47.325327 containerd[1616]: time="2025-12-12T18:19:47.325118933Z" level=info msg="connecting to shim 6e0d51fadcfefda7127001d7b70ed9c060f1151732df558e439b45db3c574e52" address="unix:///run/containerd/s/6b02c001b1f60939778e7b023163988be4136c6f39e8657a671a82e581b86053" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:19:47.331728 containerd[1616]: time="2025-12-12T18:19:47.331626866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7455d79c87-7h66h,Uid:b2a9f983-dd40-4a5f-a699-2e3074e9afd2,Namespace:calico-system,Attempt:0,} returns sandbox id \"4dc99cb0aef2629fa7827fa2b5eda535479fae8d01bf322260e1f6e9944929c1\"" Dec 12 18:19:47.334461 kubelet[2819]: E1212 18:19:47.334419 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:47.338365 containerd[1616]: time="2025-12-12T18:19:47.338219388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 12 18:19:47.386972 systemd[1]: Started cri-containerd-6e0d51fadcfefda7127001d7b70ed9c060f1151732df558e439b45db3c574e52.scope - libcontainer container 6e0d51fadcfefda7127001d7b70ed9c060f1151732df558e439b45db3c574e52. 
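The repeated "FlexVolume: driver call failed" / "unexpected end of JSON input" messages earlier in this section come from the kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, which is not present on this host. A FlexVolume driver is expected to answer the init call with a JSON status on stdout, so a missing executable yields empty output and the JSON parse fails. A minimal sketch of that probe-and-parse pattern, assuming Python 3 (this is an illustration, not the kubelet's implementation; the driver path is taken from the log):

#!/usr/bin/env python3
# Sketch of a FlexVolume-style "init" probe: run the driver binary, then
# parse the JSON status it is expected to print on stdout.
import json
import subprocess

DRIVER = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

def probe(driver: str) -> dict:
    try:
        out = subprocess.run([driver, "init"], capture_output=True, text=True).stdout
    except FileNotFoundError:
        # Corresponds to the log's "executable file not found in $PATH, output: ''"
        out = ""
    try:
        # A working driver prints something like {"status": "Success", ...}
        return json.loads(out)
    except json.JSONDecodeError as err:
        # Empty output is the "unexpected end of JSON input" case seen in the log
        raise RuntimeError(f"driver probe failed: {err}") from err

if __name__ == "__main__":
    try:
        print(probe(DRIVER))
    except RuntimeError as err:
        print(err)

In this log the errors are benign noise: the nodeagent~uds driver directory exists but its binary does not, so the kubelet logs the failed probe and skips the plugin on every probe cycle.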
Dec 12 18:19:47.406000 audit: BPF prog-id=154 op=LOAD Dec 12 18:19:47.408000 audit: BPF prog-id=155 op=LOAD Dec 12 18:19:47.408000 audit[3337]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=3324 pid=3337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:47.408000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665306435316661646366656664613731323730303164376237306564 Dec 12 18:19:47.408000 audit: BPF prog-id=155 op=UNLOAD Dec 12 18:19:47.408000 audit[3337]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3324 pid=3337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:47.408000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665306435316661646366656664613731323730303164376237306564 Dec 12 18:19:47.408000 audit: BPF prog-id=156 op=LOAD Dec 12 18:19:47.408000 audit[3337]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3324 pid=3337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:47.408000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665306435316661646366656664613731323730303164376237306564 Dec 12 18:19:47.408000 audit: BPF prog-id=157 op=LOAD Dec 12 18:19:47.408000 audit[3337]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3324 pid=3337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:47.408000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665306435316661646366656664613731323730303164376237306564 Dec 12 18:19:47.408000 audit: BPF prog-id=157 op=UNLOAD Dec 12 18:19:47.408000 audit[3337]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3324 pid=3337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:47.408000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665306435316661646366656664613731323730303164376237306564 Dec 12 18:19:47.408000 audit: BPF prog-id=156 op=UNLOAD Dec 12 18:19:47.408000 audit[3337]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3324 pid=3337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:47.408000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665306435316661646366656664613731323730303164376237306564 Dec 12 18:19:47.408000 audit: BPF prog-id=158 op=LOAD Dec 12 18:19:47.408000 audit[3337]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3324 pid=3337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:47.408000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665306435316661646366656664613731323730303164376237306564 Dec 12 18:19:47.437866 containerd[1616]: time="2025-12-12T18:19:47.437693297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nnwcp,Uid:f8b9549a-db0a-4cad-b113-366f006b8d72,Namespace:calico-system,Attempt:0,} returns sandbox id \"6e0d51fadcfefda7127001d7b70ed9c060f1151732df558e439b45db3c574e52\"" Dec 12 18:19:47.442135 kubelet[2819]: E1212 18:19:47.442002 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:47.749000 audit[3365]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3365 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:19:47.749000 audit[3365]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffdd949b3d0 a2=0 a3=7ffdd949b3bc items=0 ppid=2939 pid=3365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:47.749000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:19:47.754000 audit[3365]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3365 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:19:47.754000 audit[3365]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdd949b3d0 a2=0 a3=0 items=0 ppid=2939 pid=3365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:47.754000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:19:48.127563 kubelet[2819]: E1212 18:19:48.127364 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kbmx" 
podUID="3530bcd5-7985-42ba-8587-569180a87a41" Dec 12 18:19:48.797206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2452024897.mount: Deactivated successfully. Dec 12 18:19:50.128405 kubelet[2819]: E1212 18:19:50.128345 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kbmx" podUID="3530bcd5-7985-42ba-8587-569180a87a41" Dec 12 18:19:50.194064 containerd[1616]: time="2025-12-12T18:19:50.193990205Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:19:50.195816 containerd[1616]: time="2025-12-12T18:19:50.195452559Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Dec 12 18:19:50.197184 containerd[1616]: time="2025-12-12T18:19:50.197134467Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:19:50.234787 containerd[1616]: time="2025-12-12T18:19:50.234718888Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:19:50.236717 containerd[1616]: time="2025-12-12T18:19:50.236089674Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.897766463s" Dec 12 18:19:50.236717 containerd[1616]: time="2025-12-12T18:19:50.236209545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 12 18:19:50.239549 containerd[1616]: time="2025-12-12T18:19:50.238750918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 12 18:19:50.271509 containerd[1616]: time="2025-12-12T18:19:50.271450306Z" level=info msg="CreateContainer within sandbox \"4dc99cb0aef2629fa7827fa2b5eda535479fae8d01bf322260e1f6e9944929c1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 12 18:19:50.353540 containerd[1616]: time="2025-12-12T18:19:50.352017049Z" level=info msg="Container da313e031ec07a5e6675f3a1f66e0c9d83136b9f3c04117f0d0134d8e277f881: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:19:50.361647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3010751936.mount: Deactivated successfully. 
Dec 12 18:19:50.517946 containerd[1616]: time="2025-12-12T18:19:50.517869513Z" level=info msg="CreateContainer within sandbox \"4dc99cb0aef2629fa7827fa2b5eda535479fae8d01bf322260e1f6e9944929c1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"da313e031ec07a5e6675f3a1f66e0c9d83136b9f3c04117f0d0134d8e277f881\"" Dec 12 18:19:50.519150 containerd[1616]: time="2025-12-12T18:19:50.519069217Z" level=info msg="StartContainer for \"da313e031ec07a5e6675f3a1f66e0c9d83136b9f3c04117f0d0134d8e277f881\"" Dec 12 18:19:50.521136 containerd[1616]: time="2025-12-12T18:19:50.521022396Z" level=info msg="connecting to shim da313e031ec07a5e6675f3a1f66e0c9d83136b9f3c04117f0d0134d8e277f881" address="unix:///run/containerd/s/a0b7b673dd1e3cec478a35c637083f886f5dc2ba1295050de98b970796aeea50" protocol=ttrpc version=3 Dec 12 18:19:50.563931 systemd[1]: Started cri-containerd-da313e031ec07a5e6675f3a1f66e0c9d83136b9f3c04117f0d0134d8e277f881.scope - libcontainer container da313e031ec07a5e6675f3a1f66e0c9d83136b9f3c04117f0d0134d8e277f881. Dec 12 18:19:50.593000 audit: BPF prog-id=159 op=LOAD Dec 12 18:19:50.594577 kernel: kauditd_printk_skb: 64 callbacks suppressed Dec 12 18:19:50.594678 kernel: audit: type=1334 audit(1765563590.593:551): prog-id=159 op=LOAD Dec 12 18:19:50.597000 audit: BPF prog-id=160 op=LOAD Dec 12 18:19:50.602534 kernel: audit: type=1334 audit(1765563590.597:552): prog-id=160 op=LOAD Dec 12 18:19:50.597000 audit[3376]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3249 pid=3376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:50.611538 kernel: audit: type=1300 audit(1765563590.597:552): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3249 pid=3376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:50.611685 kernel: audit: type=1327 audit(1765563590.597:552): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461333133653033316563303761356536363735663361316636366530 Dec 12 18:19:50.597000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461333133653033316563303761356536363735663361316636366530 Dec 12 18:19:50.598000 audit: BPF prog-id=160 op=UNLOAD Dec 12 18:19:50.619056 kernel: audit: type=1334 audit(1765563590.598:553): prog-id=160 op=UNLOAD Dec 12 18:19:50.598000 audit[3376]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3249 pid=3376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:50.633592 kernel: audit: type=1300 audit(1765563590.598:553): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3249 pid=3376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Dec 12 18:19:50.598000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461333133653033316563303761356536363735663361316636366530 Dec 12 18:19:50.598000 audit: BPF prog-id=161 op=LOAD Dec 12 18:19:50.651018 kernel: audit: type=1327 audit(1765563590.598:553): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461333133653033316563303761356536363735663361316636366530 Dec 12 18:19:50.651208 kernel: audit: type=1334 audit(1765563590.598:554): prog-id=161 op=LOAD Dec 12 18:19:50.598000 audit[3376]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3249 pid=3376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:50.661721 kernel: audit: type=1300 audit(1765563590.598:554): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3249 pid=3376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:50.598000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461333133653033316563303761356536363735663361316636366530 Dec 12 18:19:50.669618 kernel: audit: type=1327 audit(1765563590.598:554): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461333133653033316563303761356536363735663361316636366530 Dec 12 18:19:50.598000 audit: BPF prog-id=162 op=LOAD Dec 12 18:19:50.598000 audit[3376]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3249 pid=3376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:50.598000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461333133653033316563303761356536363735663361316636366530 Dec 12 18:19:50.598000 audit: BPF prog-id=162 op=UNLOAD Dec 12 18:19:50.598000 audit[3376]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3249 pid=3376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:50.598000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461333133653033316563303761356536363735663361316636366530 Dec 12 18:19:50.598000 audit: BPF prog-id=161 op=UNLOAD Dec 12 
18:19:50.598000 audit[3376]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3249 pid=3376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:50.598000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461333133653033316563303761356536363735663361316636366530 Dec 12 18:19:50.598000 audit: BPF prog-id=163 op=LOAD Dec 12 18:19:50.598000 audit[3376]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3249 pid=3376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:50.598000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461333133653033316563303761356536363735663361316636366530 Dec 12 18:19:50.835955 containerd[1616]: time="2025-12-12T18:19:50.835767217Z" level=info msg="StartContainer for \"da313e031ec07a5e6675f3a1f66e0c9d83136b9f3c04117f0d0134d8e277f881\" returns successfully" Dec 12 18:19:51.318521 kubelet[2819]: E1212 18:19:51.318446 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:51.367377 kubelet[2819]: I1212 18:19:51.367290 2819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7455d79c87-7h66h" podStartSLOduration=2.466897508 podStartE2EDuration="5.367248789s" podCreationTimestamp="2025-12-12 18:19:46 +0000 UTC" firstStartedPulling="2025-12-12 18:19:47.33773781 +0000 UTC m=+25.449705467" lastFinishedPulling="2025-12-12 18:19:50.238089092 +0000 UTC m=+28.350056748" observedRunningTime="2025-12-12 18:19:51.347693465 +0000 UTC m=+29.459661128" watchObservedRunningTime="2025-12-12 18:19:51.367248789 +0000 UTC m=+29.479216455" Dec 12 18:19:51.392448 kubelet[2819]: E1212 18:19:51.392215 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:51.392448 kubelet[2819]: W1212 18:19:51.392449 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:51.392962 kubelet[2819]: E1212 18:19:51.392920 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:19:51.393865 kubelet[2819]: E1212 18:19:51.393830 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:51.393865 kubelet[2819]: W1212 18:19:51.393862 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:51.394435 kubelet[2819]: E1212 18:19:51.393893 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:51.394691 kubelet[2819]: E1212 18:19:51.394434 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:51.394691 kubelet[2819]: W1212 18:19:51.394453 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:51.394691 kubelet[2819]: E1212 18:19:51.394499 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:51.395031 kubelet[2819]: E1212 18:19:51.395012 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:51.395031 kubelet[2819]: W1212 18:19:51.395029 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:51.395471 kubelet[2819]: E1212 18:19:51.395047 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:51.395471 kubelet[2819]: E1212 18:19:51.395281 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:51.395471 kubelet[2819]: W1212 18:19:51.395290 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:51.395471 kubelet[2819]: E1212 18:19:51.395302 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:51.396014 kubelet[2819]: E1212 18:19:51.395571 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:51.396014 kubelet[2819]: W1212 18:19:51.395586 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:51.396014 kubelet[2819]: E1212 18:19:51.395601 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:19:51.396014 kubelet[2819]: E1212 18:19:51.395846 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:51.396014 kubelet[2819]: W1212 18:19:51.395858 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:51.396014 kubelet[2819]: E1212 18:19:51.395874 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:51.396438 kubelet[2819]: E1212 18:19:51.396091 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:51.396438 kubelet[2819]: W1212 18:19:51.396102 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:51.396438 kubelet[2819]: E1212 18:19:51.396115 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:51.396438 kubelet[2819]: E1212 18:19:51.396382 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:51.396438 kubelet[2819]: W1212 18:19:51.396397 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:51.396438 kubelet[2819]: E1212 18:19:51.396412 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:51.397993 kubelet[2819]: E1212 18:19:51.396665 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:51.397993 kubelet[2819]: W1212 18:19:51.396679 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:51.397993 kubelet[2819]: E1212 18:19:51.396693 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:51.397993 kubelet[2819]: E1212 18:19:51.396913 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:51.397993 kubelet[2819]: W1212 18:19:51.396982 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:51.397993 kubelet[2819]: E1212 18:19:51.397002 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:19:51.397993 kubelet[2819]: E1212 18:19:51.397229 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:51.397993 kubelet[2819]: W1212 18:19:51.397241 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:51.397993 kubelet[2819]: E1212 18:19:51.397255 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:51.397993 kubelet[2819]: E1212 18:19:51.397518 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:51.398813 kubelet[2819]: W1212 18:19:51.397532 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:51.398813 kubelet[2819]: E1212 18:19:51.397549 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:51.398813 kubelet[2819]: E1212 18:19:51.397786 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:51.398813 kubelet[2819]: W1212 18:19:51.397800 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:51.398813 kubelet[2819]: E1212 18:19:51.397814 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:51.398813 kubelet[2819]: E1212 18:19:51.398065 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:51.398813 kubelet[2819]: W1212 18:19:51.398078 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:51.398813 kubelet[2819]: E1212 18:19:51.398097 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:19:51.403000 audit[3422]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=3422 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:19:51.403000 audit[3422]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff6df98cd0 a2=0 a3=7fff6df98cbc items=0 ppid=2939 pid=3422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:51.403000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:19:51.407000 audit[3422]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=3422 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:19:51.407000 audit[3422]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fff6df98cd0 a2=0 a3=7fff6df98cbc items=0 ppid=2939 pid=3422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:51.407000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:19:51.441336 kubelet[2819]: E1212 18:19:51.441276 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:51.441844 kubelet[2819]: W1212 18:19:51.441617 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:51.441844 kubelet[2819]: E1212 18:19:51.441668 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:51.442498 kubelet[2819]: E1212 18:19:51.442393 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:51.442498 kubelet[2819]: W1212 18:19:51.442411 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:51.442498 kubelet[2819]: E1212 18:19:51.442435 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:51.442978 kubelet[2819]: E1212 18:19:51.442948 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:51.442978 kubelet[2819]: W1212 18:19:51.442973 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:51.443254 kubelet[2819]: E1212 18:19:51.443009 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:19:51.443356 kubelet[2819]: E1212 18:19:51.443334 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:51.443356 kubelet[2819]: W1212 18:19:51.443351 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:51.443541 kubelet[2819]: E1212 18:19:51.443388 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:51.443800 kubelet[2819]: E1212 18:19:51.443777 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:51.443884 kubelet[2819]: W1212 18:19:51.443799 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:51.443884 kubelet[2819]: E1212 18:19:51.443837 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:51.444210 kubelet[2819]: E1212 18:19:51.444191 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:51.444210 kubelet[2819]: W1212 18:19:51.444209 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:51.444609 kubelet[2819]: E1212 18:19:51.444420 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:51.444777 kubelet[2819]: E1212 18:19:51.444762 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:51.444777 kubelet[2819]: W1212 18:19:51.444777 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:51.444963 kubelet[2819]: E1212 18:19:51.444844 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:19:51.445274 kubelet[2819]: E1212 18:19:51.445182 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:51.445274 kubelet[2819]: W1212 18:19:51.445200 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:51.445589 kubelet[2819]: E1212 18:19:51.445564 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:51.445589 kubelet[2819]: W1212 18:19:51.445582 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:51.445836 kubelet[2819]: E1212 18:19:51.445598 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:51.445836 kubelet[2819]: E1212 18:19:51.445568 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:51.445836 kubelet[2819]: E1212 18:19:51.445813 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:51.445836 kubelet[2819]: W1212 18:19:51.445823 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:51.446294 kubelet[2819]: E1212 18:19:51.445848 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:51.446294 kubelet[2819]: E1212 18:19:51.446096 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:51.446294 kubelet[2819]: W1212 18:19:51.446109 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:51.446294 kubelet[2819]: E1212 18:19:51.446132 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:51.446866 kubelet[2819]: E1212 18:19:51.446773 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:51.446866 kubelet[2819]: W1212 18:19:51.446809 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:51.446866 kubelet[2819]: E1212 18:19:51.446838 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:19:51.447163 kubelet[2819]: E1212 18:19:51.447136 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:51.447163 kubelet[2819]: W1212 18:19:51.447153 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:51.447301 kubelet[2819]: E1212 18:19:51.447168 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:51.447397 kubelet[2819]: E1212 18:19:51.447382 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:51.447397 kubelet[2819]: W1212 18:19:51.447396 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:51.447578 kubelet[2819]: E1212 18:19:51.447421 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:51.447794 kubelet[2819]: E1212 18:19:51.447733 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:51.447794 kubelet[2819]: W1212 18:19:51.447745 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:51.447794 kubelet[2819]: E1212 18:19:51.447759 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:51.448068 kubelet[2819]: E1212 18:19:51.448038 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:51.448068 kubelet[2819]: W1212 18:19:51.448047 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:51.448068 kubelet[2819]: E1212 18:19:51.448064 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:51.449057 kubelet[2819]: E1212 18:19:51.448916 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:51.449057 kubelet[2819]: W1212 18:19:51.448933 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:51.449057 kubelet[2819]: E1212 18:19:51.448951 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:19:51.449218 kubelet[2819]: E1212 18:19:51.449208 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:19:51.449269 kubelet[2819]: W1212 18:19:51.449260 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:19:51.449333 kubelet[2819]: E1212 18:19:51.449309 2819 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:19:51.835107 containerd[1616]: time="2025-12-12T18:19:51.835010481Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:19:51.837546 containerd[1616]: time="2025-12-12T18:19:51.837213290Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 12 18:19:51.839099 containerd[1616]: time="2025-12-12T18:19:51.839013717Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:19:51.842356 containerd[1616]: time="2025-12-12T18:19:51.842169907Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:19:51.843473 containerd[1616]: time="2025-12-12T18:19:51.843300982Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.604498294s" Dec 12 18:19:51.843473 containerd[1616]: time="2025-12-12T18:19:51.843350760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 12 18:19:51.849079 containerd[1616]: time="2025-12-12T18:19:51.849030603Z" level=info msg="CreateContainer within sandbox \"6e0d51fadcfefda7127001d7b70ed9c060f1151732df558e439b45db3c574e52\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 12 18:19:51.865509 containerd[1616]: time="2025-12-12T18:19:51.863798376Z" level=info msg="Container 7829e1ed3901c733a3c23bc10a8b280598d07e7a74d2137550a25aed9fc27db8: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:19:51.873190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3356605248.mount: Deactivated successfully. 
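The long run of "Failed to unmarshal output for command: init" warnings above is the kubelet re-probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds while the binary is still missing ("executable file not found in $PATH"), apparently because the Calico flexvol-driver init container being created in the entries just above has not yet copied it into place, so the driver call returns empty output where the kubelet expects JSON. Purely as an illustration of the response shape the FlexVolume "init" handshake expects (this is not the real Calico uds driver):

    #!/usr/bin/env python3
    # Illustrative FlexVolume driver stub, NOT the actual Calico uds binary:
    # kubelet runs the driver with a command name ("init", "mount", ...) and
    # parses a JSON object from stdout. Empty stdout is exactly what produces
    # "Failed to unmarshal output ... unexpected end of JSON input" above.
    import json
    import sys

    def main():
        command = sys.argv[1] if len(sys.argv) > 1 else ""
        if command == "init":
            print(json.dumps({"status": "Success",
                              "capabilities": {"attach": False}}))
        else:
            print(json.dumps({"status": "Not supported",
                              "message": "command %r not implemented" % command}))
        return 0

    if __name__ == "__main__":
        sys.exit(main())

Once the real driver is installed by the flexvol-driver container and starts answering "init" with JSON like the above, these probe warnings stop.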
Dec 12 18:19:51.882639 containerd[1616]: time="2025-12-12T18:19:51.882559652Z" level=info msg="CreateContainer within sandbox \"6e0d51fadcfefda7127001d7b70ed9c060f1151732df558e439b45db3c574e52\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7829e1ed3901c733a3c23bc10a8b280598d07e7a74d2137550a25aed9fc27db8\"" Dec 12 18:19:51.884425 containerd[1616]: time="2025-12-12T18:19:51.884329216Z" level=info msg="StartContainer for \"7829e1ed3901c733a3c23bc10a8b280598d07e7a74d2137550a25aed9fc27db8\"" Dec 12 18:19:51.889756 containerd[1616]: time="2025-12-12T18:19:51.889680970Z" level=info msg="connecting to shim 7829e1ed3901c733a3c23bc10a8b280598d07e7a74d2137550a25aed9fc27db8" address="unix:///run/containerd/s/6b02c001b1f60939778e7b023163988be4136c6f39e8657a671a82e581b86053" protocol=ttrpc version=3 Dec 12 18:19:51.939922 systemd[1]: Started cri-containerd-7829e1ed3901c733a3c23bc10a8b280598d07e7a74d2137550a25aed9fc27db8.scope - libcontainer container 7829e1ed3901c733a3c23bc10a8b280598d07e7a74d2137550a25aed9fc27db8. Dec 12 18:19:52.016000 audit: BPF prog-id=164 op=LOAD Dec 12 18:19:52.016000 audit[3460]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3324 pid=3460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:52.016000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738323965316564333930316337333361336332336263313061386232 Dec 12 18:19:52.017000 audit: BPF prog-id=165 op=LOAD Dec 12 18:19:52.017000 audit[3460]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3324 pid=3460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:52.017000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738323965316564333930316337333361336332336263313061386232 Dec 12 18:19:52.017000 audit: BPF prog-id=165 op=UNLOAD Dec 12 18:19:52.017000 audit[3460]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3324 pid=3460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:52.017000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738323965316564333930316337333361336332336263313061386232 Dec 12 18:19:52.017000 audit: BPF prog-id=164 op=UNLOAD Dec 12 18:19:52.017000 audit[3460]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3324 pid=3460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:52.017000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738323965316564333930316337333361336332336263313061386232 Dec 12 18:19:52.017000 audit: BPF prog-id=166 op=LOAD Dec 12 18:19:52.017000 audit[3460]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3324 pid=3460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:52.017000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738323965316564333930316337333361336332336263313061386232 Dec 12 18:19:52.056863 containerd[1616]: time="2025-12-12T18:19:52.056791691Z" level=info msg="StartContainer for \"7829e1ed3901c733a3c23bc10a8b280598d07e7a74d2137550a25aed9fc27db8\" returns successfully" Dec 12 18:19:52.076182 systemd[1]: cri-containerd-7829e1ed3901c733a3c23bc10a8b280598d07e7a74d2137550a25aed9fc27db8.scope: Deactivated successfully. Dec 12 18:19:52.080000 audit: BPF prog-id=166 op=UNLOAD Dec 12 18:19:52.105549 containerd[1616]: time="2025-12-12T18:19:52.105220656Z" level=info msg="received container exit event container_id:\"7829e1ed3901c733a3c23bc10a8b280598d07e7a74d2137550a25aed9fc27db8\" id:\"7829e1ed3901c733a3c23bc10a8b280598d07e7a74d2137550a25aed9fc27db8\" pid:3473 exited_at:{seconds:1765563592 nanos:81335940}" Dec 12 18:19:52.127697 kubelet[2819]: E1212 18:19:52.127644 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kbmx" podUID="3530bcd5-7985-42ba-8587-569180a87a41" Dec 12 18:19:52.167094 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7829e1ed3901c733a3c23bc10a8b280598d07e7a74d2137550a25aed9fc27db8-rootfs.mount: Deactivated successfully. 
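The audit PROCTITLE values in these records are the invoked command line, hex-encoded with NUL bytes between arguments. A small sketch for turning them back into argv (the sample below is a prefix of the runc proctitle that appears repeatedly above):

    #!/usr/bin/env python3
    # Decode a Linux audit PROCTITLE field: hex-encoded argv, NUL-separated.
    def decode_proctitle(hex_value):
        raw = bytes.fromhex(hex_value)
        return [part.decode("utf-8", errors="replace")
                for part in raw.split(b"\x00") if part]

    if __name__ == "__main__":
        sample = ("72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E63"
                  "2F6B38732E696F002D2D6C6F67")
        # -> ['runc', '--root', '/run/containerd/runc/k8s.io', '--log']
        print(decode_proctitle(sample))

Decoded, the proctitle entries above are runc invocations rooted at /run/containerd/runc/k8s.io, logging into the per-task directory whose name is truncated in the record itself.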
Dec 12 18:19:52.325303 kubelet[2819]: E1212 18:19:52.325182 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:52.326396 kubelet[2819]: E1212 18:19:52.326018 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:52.330765 containerd[1616]: time="2025-12-12T18:19:52.330718652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 12 18:19:53.328509 kubelet[2819]: E1212 18:19:53.327806 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:54.126739 kubelet[2819]: E1212 18:19:54.126675 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kbmx" podUID="3530bcd5-7985-42ba-8587-569180a87a41" Dec 12 18:19:55.857896 containerd[1616]: time="2025-12-12T18:19:55.857746776Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:19:55.859970 containerd[1616]: time="2025-12-12T18:19:55.859911372Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Dec 12 18:19:55.861201 containerd[1616]: time="2025-12-12T18:19:55.861090406Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:19:55.865518 containerd[1616]: time="2025-12-12T18:19:55.864582357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:19:55.866034 containerd[1616]: time="2025-12-12T18:19:55.865980997Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.534957684s" Dec 12 18:19:55.866152 containerd[1616]: time="2025-12-12T18:19:55.866040970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 12 18:19:55.869790 containerd[1616]: time="2025-12-12T18:19:55.869707364Z" level=info msg="CreateContainer within sandbox \"6e0d51fadcfefda7127001d7b70ed9c060f1151732df558e439b45db3c574e52\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 12 18:19:55.924526 containerd[1616]: time="2025-12-12T18:19:55.919804014Z" level=info msg="Container 669bc022d8edf0066940561b5862caa2b11a44ddccb1a94bd8405d48cd4db381: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:19:55.948081 containerd[1616]: time="2025-12-12T18:19:55.948007435Z" level=info msg="CreateContainer within sandbox 
\"6e0d51fadcfefda7127001d7b70ed9c060f1151732df558e439b45db3c574e52\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"669bc022d8edf0066940561b5862caa2b11a44ddccb1a94bd8405d48cd4db381\"" Dec 12 18:19:55.950746 containerd[1616]: time="2025-12-12T18:19:55.949529591Z" level=info msg="StartContainer for \"669bc022d8edf0066940561b5862caa2b11a44ddccb1a94bd8405d48cd4db381\"" Dec 12 18:19:55.953508 containerd[1616]: time="2025-12-12T18:19:55.953326081Z" level=info msg="connecting to shim 669bc022d8edf0066940561b5862caa2b11a44ddccb1a94bd8405d48cd4db381" address="unix:///run/containerd/s/6b02c001b1f60939778e7b023163988be4136c6f39e8657a671a82e581b86053" protocol=ttrpc version=3 Dec 12 18:19:55.993960 systemd[1]: Started cri-containerd-669bc022d8edf0066940561b5862caa2b11a44ddccb1a94bd8405d48cd4db381.scope - libcontainer container 669bc022d8edf0066940561b5862caa2b11a44ddccb1a94bd8405d48cd4db381. Dec 12 18:19:56.070000 audit: BPF prog-id=167 op=LOAD Dec 12 18:19:56.072364 kernel: kauditd_printk_skb: 34 callbacks suppressed Dec 12 18:19:56.072573 kernel: audit: type=1334 audit(1765563596.070:567): prog-id=167 op=LOAD Dec 12 18:19:56.070000 audit[3517]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3324 pid=3517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:56.078736 kernel: audit: type=1300 audit(1765563596.070:567): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3324 pid=3517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:56.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636396263303232643865646630303636393430353631623538363263 Dec 12 18:19:56.070000 audit: BPF prog-id=168 op=LOAD Dec 12 18:19:56.093702 kernel: audit: type=1327 audit(1765563596.070:567): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636396263303232643865646630303636393430353631623538363263 Dec 12 18:19:56.093838 kernel: audit: type=1334 audit(1765563596.070:568): prog-id=168 op=LOAD Dec 12 18:19:56.070000 audit[3517]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3324 pid=3517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:56.098002 kernel: audit: type=1300 audit(1765563596.070:568): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3324 pid=3517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:56.070000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636396263303232643865646630303636393430353631623538363263 Dec 12 18:19:56.106344 kernel: audit: type=1327 audit(1765563596.070:568): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636396263303232643865646630303636393430353631623538363263 Dec 12 18:19:56.070000 audit: BPF prog-id=168 op=UNLOAD Dec 12 18:19:56.115541 kernel: audit: type=1334 audit(1765563596.070:569): prog-id=168 op=UNLOAD Dec 12 18:19:56.070000 audit[3517]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3324 pid=3517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:56.126588 kernel: audit: type=1300 audit(1765563596.070:569): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3324 pid=3517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:56.128371 kubelet[2819]: E1212 18:19:56.128306 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kbmx" podUID="3530bcd5-7985-42ba-8587-569180a87a41" Dec 12 18:19:56.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636396263303232643865646630303636393430353631623538363263 Dec 12 18:19:56.070000 audit: BPF prog-id=167 op=UNLOAD Dec 12 18:19:56.139224 kernel: audit: type=1327 audit(1765563596.070:569): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636396263303232643865646630303636393430353631623538363263 Dec 12 18:19:56.139335 kernel: audit: type=1334 audit(1765563596.070:570): prog-id=167 op=UNLOAD Dec 12 18:19:56.070000 audit[3517]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3324 pid=3517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:56.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636396263303232643865646630303636393430353631623538363263 Dec 12 18:19:56.070000 audit: BPF prog-id=169 op=LOAD Dec 12 18:19:56.070000 audit[3517]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3324 pid=3517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:19:56.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636396263303232643865646630303636393430353631623538363263 Dec 12 18:19:56.167320 containerd[1616]: time="2025-12-12T18:19:56.167264500Z" level=info msg="StartContainer for \"669bc022d8edf0066940561b5862caa2b11a44ddccb1a94bd8405d48cd4db381\" returns successfully" Dec 12 18:19:56.350830 kubelet[2819]: E1212 18:19:56.350767 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:57.013940 systemd[1]: cri-containerd-669bc022d8edf0066940561b5862caa2b11a44ddccb1a94bd8405d48cd4db381.scope: Deactivated successfully. Dec 12 18:19:57.014437 systemd[1]: cri-containerd-669bc022d8edf0066940561b5862caa2b11a44ddccb1a94bd8405d48cd4db381.scope: Consumed 770ms CPU time, 167.1M memory peak, 14.7M read from disk, 171.3M written to disk. Dec 12 18:19:57.017000 audit: BPF prog-id=169 op=UNLOAD Dec 12 18:19:57.024038 containerd[1616]: time="2025-12-12T18:19:57.023989819Z" level=info msg="received container exit event container_id:\"669bc022d8edf0066940561b5862caa2b11a44ddccb1a94bd8405d48cd4db381\" id:\"669bc022d8edf0066940561b5862caa2b11a44ddccb1a94bd8405d48cd4db381\" pid:3530 exited_at:{seconds:1765563597 nanos:21189085}" Dec 12 18:19:57.096662 kubelet[2819]: I1212 18:19:57.096343 2819 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 12 18:19:57.147101 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-669bc022d8edf0066940561b5862caa2b11a44ddccb1a94bd8405d48cd4db381-rootfs.mount: Deactivated successfully. Dec 12 18:19:57.226033 systemd[1]: Created slice kubepods-besteffort-pod88464bd3_9403_4901_97b2_3cffb941f328.slice - libcontainer container kubepods-besteffort-pod88464bd3_9403_4901_97b2_3cffb941f328.slice. Dec 12 18:19:57.246784 systemd[1]: Created slice kubepods-burstable-pod516ed3cc_2563_4682_9bb4_937befb1cd30.slice - libcontainer container kubepods-burstable-pod516ed3cc_2563_4682_9bb4_937befb1cd30.slice. Dec 12 18:19:57.278363 systemd[1]: Created slice kubepods-burstable-podf4950266_b324_4bd8_9271_ead6b00ca6f0.slice - libcontainer container kubepods-burstable-podf4950266_b324_4bd8_9271_ead6b00ca6f0.slice. Dec 12 18:19:57.298066 systemd[1]: Created slice kubepods-besteffort-podab0029a5_8491_42f1_b060_fef0c0422b49.slice - libcontainer container kubepods-besteffort-podab0029a5_8491_42f1_b060_fef0c0422b49.slice. 
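The recurring "Nameserver limits exceeded" errors come from the kubelet building pod DNS config from the node's resolver file: Kubernetes passes at most three nameservers to a pod, and the applied line shown above ("67.207.67.3 67.207.67.2 67.207.67.3") also repeats one server. A rough sketch of how one might spot that condition on the node; the file path is illustrative, since the kubelet reads whatever its --resolv-conf flag points at:

    #!/usr/bin/env python3
    # Rough check for the condition behind kubelet's "Nameserver limits exceeded":
    # more nameserver entries (or duplicates) than the kubelet will pass to pods.
    MAX_NAMESERVERS = 3  # upstream Kubernetes DNS limit

    def nameservers(path="/etc/resolv.conf"):  # path depends on --resolv-conf
        servers = []
        with open(path) as f:
            for line in f:
                fields = line.split()
                if len(fields) >= 2 and fields[0] == "nameserver":
                    servers.append(fields[1])
        return servers

    if __name__ == "__main__":
        servers = nameservers()
        if len(servers) > MAX_NAMESERVERS or len(set(servers)) != len(servers):
            print("kubelet will warn and truncate:", servers)
        else:
            print("ok:", servers)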
Dec 12 18:19:57.311246 kubelet[2819]: I1212 18:19:57.311071 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58d48\" (UniqueName: \"kubernetes.io/projected/516ed3cc-2563-4682-9bb4-937befb1cd30-kube-api-access-58d48\") pod \"coredns-668d6bf9bc-bhjvt\" (UID: \"516ed3cc-2563-4682-9bb4-937befb1cd30\") " pod="kube-system/coredns-668d6bf9bc-bhjvt" Dec 12 18:19:57.311246 kubelet[2819]: I1212 18:19:57.311137 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s99m7\" (UniqueName: \"kubernetes.io/projected/f4c646c7-47f1-433d-b7c4-005cccecda6a-kube-api-access-s99m7\") pod \"calico-apiserver-9bb959468-v58pb\" (UID: \"f4c646c7-47f1-433d-b7c4-005cccecda6a\") " pod="calico-apiserver/calico-apiserver-9bb959468-v58pb" Dec 12 18:19:57.311246 kubelet[2819]: I1212 18:19:57.311174 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8d5k\" (UniqueName: \"kubernetes.io/projected/ab0029a5-8491-42f1-b060-fef0c0422b49-kube-api-access-f8d5k\") pod \"calico-apiserver-9bb959468-57r44\" (UID: \"ab0029a5-8491-42f1-b060-fef0c0422b49\") " pod="calico-apiserver/calico-apiserver-9bb959468-57r44" Dec 12 18:19:57.313738 kubelet[2819]: I1212 18:19:57.312531 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4950266-b324-4bd8-9271-ead6b00ca6f0-config-volume\") pod \"coredns-668d6bf9bc-q4kjx\" (UID: \"f4950266-b324-4bd8-9271-ead6b00ca6f0\") " pod="kube-system/coredns-668d6bf9bc-q4kjx" Dec 12 18:19:57.313738 kubelet[2819]: I1212 18:19:57.312615 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkp5f\" (UniqueName: \"kubernetes.io/projected/f4950266-b324-4bd8-9271-ead6b00ca6f0-kube-api-access-rkp5f\") pod \"coredns-668d6bf9bc-q4kjx\" (UID: \"f4950266-b324-4bd8-9271-ead6b00ca6f0\") " pod="kube-system/coredns-668d6bf9bc-q4kjx" Dec 12 18:19:57.313738 kubelet[2819]: I1212 18:19:57.312656 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c57efa3a-e82c-436b-9c07-8cf6921dcd5d-goldmane-ca-bundle\") pod \"goldmane-666569f655-h7lvc\" (UID: \"c57efa3a-e82c-436b-9c07-8cf6921dcd5d\") " pod="calico-system/goldmane-666569f655-h7lvc" Dec 12 18:19:57.313738 kubelet[2819]: I1212 18:19:57.312687 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-448p8\" (UniqueName: \"kubernetes.io/projected/c57efa3a-e82c-436b-9c07-8cf6921dcd5d-kube-api-access-448p8\") pod \"goldmane-666569f655-h7lvc\" (UID: \"c57efa3a-e82c-436b-9c07-8cf6921dcd5d\") " pod="calico-system/goldmane-666569f655-h7lvc" Dec 12 18:19:57.313738 kubelet[2819]: I1212 18:19:57.312716 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f4c646c7-47f1-433d-b7c4-005cccecda6a-calico-apiserver-certs\") pod \"calico-apiserver-9bb959468-v58pb\" (UID: \"f4c646c7-47f1-433d-b7c4-005cccecda6a\") " pod="calico-apiserver/calico-apiserver-9bb959468-v58pb" Dec 12 18:19:57.315620 kubelet[2819]: I1212 18:19:57.312756 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/ed1bf1c8-646f-4c33-9642-90a577c1d786-calico-apiserver-certs\") pod \"calico-apiserver-6f9c8d5fbb-p96pq\" (UID: \"ed1bf1c8-646f-4c33-9642-90a577c1d786\") " pod="calico-apiserver/calico-apiserver-6f9c8d5fbb-p96pq" Dec 12 18:19:57.315620 kubelet[2819]: I1212 18:19:57.312781 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftxcm\" (UniqueName: \"kubernetes.io/projected/ed1bf1c8-646f-4c33-9642-90a577c1d786-kube-api-access-ftxcm\") pod \"calico-apiserver-6f9c8d5fbb-p96pq\" (UID: \"ed1bf1c8-646f-4c33-9642-90a577c1d786\") " pod="calico-apiserver/calico-apiserver-6f9c8d5fbb-p96pq" Dec 12 18:19:57.315620 kubelet[2819]: I1212 18:19:57.312805 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c57efa3a-e82c-436b-9c07-8cf6921dcd5d-config\") pod \"goldmane-666569f655-h7lvc\" (UID: \"c57efa3a-e82c-436b-9c07-8cf6921dcd5d\") " pod="calico-system/goldmane-666569f655-h7lvc" Dec 12 18:19:57.315620 kubelet[2819]: I1212 18:19:57.312833 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ab0029a5-8491-42f1-b060-fef0c0422b49-calico-apiserver-certs\") pod \"calico-apiserver-9bb959468-57r44\" (UID: \"ab0029a5-8491-42f1-b060-fef0c0422b49\") " pod="calico-apiserver/calico-apiserver-9bb959468-57r44" Dec 12 18:19:57.315620 kubelet[2819]: I1212 18:19:57.312856 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88464bd3-9403-4901-97b2-3cffb941f328-tigera-ca-bundle\") pod \"calico-kube-controllers-7fcbf96c45-vldxn\" (UID: \"88464bd3-9403-4901-97b2-3cffb941f328\") " pod="calico-system/calico-kube-controllers-7fcbf96c45-vldxn" Dec 12 18:19:57.315378 systemd[1]: Created slice kubepods-besteffort-podf4c646c7_47f1_433d_b7c4_005cccecda6a.slice - libcontainer container kubepods-besteffort-podf4c646c7_47f1_433d_b7c4_005cccecda6a.slice. 
Dec 12 18:19:57.316725 kubelet[2819]: I1212 18:19:57.312883 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c57efa3a-e82c-436b-9c07-8cf6921dcd5d-goldmane-key-pair\") pod \"goldmane-666569f655-h7lvc\" (UID: \"c57efa3a-e82c-436b-9c07-8cf6921dcd5d\") " pod="calico-system/goldmane-666569f655-h7lvc" Dec 12 18:19:57.316725 kubelet[2819]: I1212 18:19:57.312912 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkj66\" (UniqueName: \"kubernetes.io/projected/88464bd3-9403-4901-97b2-3cffb941f328-kube-api-access-mkj66\") pod \"calico-kube-controllers-7fcbf96c45-vldxn\" (UID: \"88464bd3-9403-4901-97b2-3cffb941f328\") " pod="calico-system/calico-kube-controllers-7fcbf96c45-vldxn" Dec 12 18:19:57.316725 kubelet[2819]: I1212 18:19:57.312945 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/516ed3cc-2563-4682-9bb4-937befb1cd30-config-volume\") pod \"coredns-668d6bf9bc-bhjvt\" (UID: \"516ed3cc-2563-4682-9bb4-937befb1cd30\") " pod="kube-system/coredns-668d6bf9bc-bhjvt" Dec 12 18:19:57.339345 systemd[1]: Created slice kubepods-besteffort-poded1bf1c8_646f_4c33_9642_90a577c1d786.slice - libcontainer container kubepods-besteffort-poded1bf1c8_646f_4c33_9642_90a577c1d786.slice. Dec 12 18:19:57.366447 systemd[1]: Created slice kubepods-besteffort-pod942e71a6_e303_45bb_a6a7_005da5952aa7.slice - libcontainer container kubepods-besteffort-pod942e71a6_e303_45bb_a6a7_005da5952aa7.slice. Dec 12 18:19:57.380331 kubelet[2819]: E1212 18:19:57.380285 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:57.385635 containerd[1616]: time="2025-12-12T18:19:57.385593224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 12 18:19:57.390186 systemd[1]: Created slice kubepods-besteffort-podc57efa3a_e82c_436b_9c07_8cf6921dcd5d.slice - libcontainer container kubepods-besteffort-podc57efa3a_e82c_436b_9c07_8cf6921dcd5d.slice. 
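The "Created slice kubepods-..." entries above are the kubelet's systemd cgroup driver carving out one slice per pod, named from the QoS class plus the pod UID with its dashes replaced by underscores (dashes are hierarchy separators in slice names). A sketch of that naming rule, assuming the usual layout where Burstable and BestEffort pods sit under a per-QoS parent slice and Guaranteed pods typically hang directly off kubepods.slice:

    #!/usr/bin/env python3
    # Sketch of the systemd-cgroup pod slice names seen in this log, assuming
    # kubepods.slice -> kubepods-<qos>.slice -> kubepods-<qos>-pod<uid>.slice.
    def pod_slice_name(pod_uid, qos_class):
        uid = pod_uid.replace("-", "_")  # slice names use '-' as a separator
        qos = qos_class.lower()
        if qos == "guaranteed":
            return "kubepods-pod%s.slice" % uid
        return "kubepods-%s-pod%s.slice" % (qos, uid)

    if __name__ == "__main__":
        # Matches the entry above for the calico-apiserver-9bb959468-v58pb pod:
        # kubepods-besteffort-podf4c646c7_47f1_433d_b7c4_005cccecda6a.slice
        print(pod_slice_name("f4c646c7-47f1-433d-b7c4-005cccecda6a", "BestEffort"))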
Dec 12 18:19:57.416368 kubelet[2819]: I1212 18:19:57.416272 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/942e71a6-e303-45bb-a6a7-005da5952aa7-whisker-ca-bundle\") pod \"whisker-59db8d7565-4jn5z\" (UID: \"942e71a6-e303-45bb-a6a7-005da5952aa7\") " pod="calico-system/whisker-59db8d7565-4jn5z" Dec 12 18:19:57.417851 kubelet[2819]: I1212 18:19:57.417772 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwlpl\" (UniqueName: \"kubernetes.io/projected/942e71a6-e303-45bb-a6a7-005da5952aa7-kube-api-access-lwlpl\") pod \"whisker-59db8d7565-4jn5z\" (UID: \"942e71a6-e303-45bb-a6a7-005da5952aa7\") " pod="calico-system/whisker-59db8d7565-4jn5z" Dec 12 18:19:57.418453 kubelet[2819]: I1212 18:19:57.417830 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/942e71a6-e303-45bb-a6a7-005da5952aa7-whisker-backend-key-pair\") pod \"whisker-59db8d7565-4jn5z\" (UID: \"942e71a6-e303-45bb-a6a7-005da5952aa7\") " pod="calico-system/whisker-59db8d7565-4jn5z" Dec 12 18:19:57.587917 kubelet[2819]: E1212 18:19:57.587778 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:57.590343 containerd[1616]: time="2025-12-12T18:19:57.590241218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q4kjx,Uid:f4950266-b324-4bd8-9271-ead6b00ca6f0,Namespace:kube-system,Attempt:0,}" Dec 12 18:19:57.659607 containerd[1616]: time="2025-12-12T18:19:57.657853721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f9c8d5fbb-p96pq,Uid:ed1bf1c8-646f-4c33-9642-90a577c1d786,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:19:57.707871 containerd[1616]: time="2025-12-12T18:19:57.707157097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-h7lvc,Uid:c57efa3a-e82c-436b-9c07-8cf6921dcd5d,Namespace:calico-system,Attempt:0,}" Dec 12 18:19:57.840637 containerd[1616]: time="2025-12-12T18:19:57.839016684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fcbf96c45-vldxn,Uid:88464bd3-9403-4901-97b2-3cffb941f328,Namespace:calico-system,Attempt:0,}" Dec 12 18:19:57.871028 kubelet[2819]: E1212 18:19:57.870959 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:19:57.872296 containerd[1616]: time="2025-12-12T18:19:57.872231389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bhjvt,Uid:516ed3cc-2563-4682-9bb4-937befb1cd30,Namespace:kube-system,Attempt:0,}" Dec 12 18:19:57.910877 containerd[1616]: time="2025-12-12T18:19:57.910824749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9bb959468-57r44,Uid:ab0029a5-8491-42f1-b060-fef0c0422b49,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:19:57.940137 containerd[1616]: time="2025-12-12T18:19:57.938972711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9bb959468-v58pb,Uid:f4c646c7-47f1-433d-b7c4-005cccecda6a,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:19:57.975831 containerd[1616]: time="2025-12-12T18:19:57.975743570Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59db8d7565-4jn5z,Uid:942e71a6-e303-45bb-a6a7-005da5952aa7,Namespace:calico-system,Attempt:0,}" Dec 12 18:19:58.183625 containerd[1616]: time="2025-12-12T18:19:58.182452302Z" level=error msg="Failed to destroy network for sandbox \"12b6a1de65374150e59f8a0fa849646d25f96db03b8efaf9005436b6cbbd8878\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:19:58.206793 systemd[1]: run-netns-cni\x2d51b3bc1d\x2ddd8f\x2d60ea\x2deb56\x2dc35793d4027d.mount: Deactivated successfully. Dec 12 18:19:58.212728 systemd[1]: Created slice kubepods-besteffort-pod3530bcd5_7985_42ba_8587_569180a87a41.slice - libcontainer container kubepods-besteffort-pod3530bcd5_7985_42ba_8587_569180a87a41.slice. Dec 12 18:19:58.214630 containerd[1616]: time="2025-12-12T18:19:58.214505346Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-h7lvc,Uid:c57efa3a-e82c-436b-9c07-8cf6921dcd5d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"12b6a1de65374150e59f8a0fa849646d25f96db03b8efaf9005436b6cbbd8878\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:19:58.219211 kubelet[2819]: E1212 18:19:58.219073 2819 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12b6a1de65374150e59f8a0fa849646d25f96db03b8efaf9005436b6cbbd8878\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:19:58.219211 kubelet[2819]: E1212 18:19:58.219161 2819 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12b6a1de65374150e59f8a0fa849646d25f96db03b8efaf9005436b6cbbd8878\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-h7lvc" Dec 12 18:19:58.219211 kubelet[2819]: E1212 18:19:58.219192 2819 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12b6a1de65374150e59f8a0fa849646d25f96db03b8efaf9005436b6cbbd8878\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-h7lvc" Dec 12 18:19:58.220026 kubelet[2819]: E1212 18:19:58.219246 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-h7lvc_calico-system(c57efa3a-e82c-436b-9c07-8cf6921dcd5d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-h7lvc_calico-system(c57efa3a-e82c-436b-9c07-8cf6921dcd5d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12b6a1de65374150e59f8a0fa849646d25f96db03b8efaf9005436b6cbbd8878\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-h7lvc" podUID="c57efa3a-e82c-436b-9c07-8cf6921dcd5d" Dec 12 18:19:58.229464 containerd[1616]: time="2025-12-12T18:19:58.229316733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5kbmx,Uid:3530bcd5-7985-42ba-8587-569180a87a41,Namespace:calico-system,Attempt:0,}" Dec 12 18:19:58.282565 containerd[1616]: time="2025-12-12T18:19:58.281930129Z" level=error msg="Failed to destroy network for sandbox \"f11f4795426a7d003be9f33bd735d936d453eb3e65f45e1593f26b21ffba2f63\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:19:58.286062 systemd[1]: run-netns-cni\x2d03da0b59\x2d98dc\x2d56d9\x2d1ae1\x2d737c87d73853.mount: Deactivated successfully. Dec 12 18:19:58.291781 containerd[1616]: time="2025-12-12T18:19:58.290591221Z" level=error msg="Failed to destroy network for sandbox \"f5f5cb1ecb864cf416471871995f85f903ee0d0ca8c11391a854b90f147dff98\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:19:58.294090 systemd[1]: run-netns-cni\x2dc56828e3\x2d1231\x2d1ec8\x2db604\x2d37edca6f14cc.mount: Deactivated successfully. Dec 12 18:19:58.300245 containerd[1616]: time="2025-12-12T18:19:58.300116143Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fcbf96c45-vldxn,Uid:88464bd3-9403-4901-97b2-3cffb941f328,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f11f4795426a7d003be9f33bd735d936d453eb3e65f45e1593f26b21ffba2f63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:19:58.301869 kubelet[2819]: E1212 18:19:58.300528 2819 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f11f4795426a7d003be9f33bd735d936d453eb3e65f45e1593f26b21ffba2f63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:19:58.301869 kubelet[2819]: E1212 18:19:58.300623 2819 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f11f4795426a7d003be9f33bd735d936d453eb3e65f45e1593f26b21ffba2f63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7fcbf96c45-vldxn" Dec 12 18:19:58.301869 kubelet[2819]: E1212 18:19:58.300649 2819 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f11f4795426a7d003be9f33bd735d936d453eb3e65f45e1593f26b21ffba2f63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7fcbf96c45-vldxn" Dec 12 18:19:58.302145 kubelet[2819]: E1212 
18:19:58.300720 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7fcbf96c45-vldxn_calico-system(88464bd3-9403-4901-97b2-3cffb941f328)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7fcbf96c45-vldxn_calico-system(88464bd3-9403-4901-97b2-3cffb941f328)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f11f4795426a7d003be9f33bd735d936d453eb3e65f45e1593f26b21ffba2f63\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7fcbf96c45-vldxn" podUID="88464bd3-9403-4901-97b2-3cffb941f328" Dec 12 18:19:58.308766 containerd[1616]: time="2025-12-12T18:19:58.308699245Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q4kjx,Uid:f4950266-b324-4bd8-9271-ead6b00ca6f0,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5f5cb1ecb864cf416471871995f85f903ee0d0ca8c11391a854b90f147dff98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:19:58.309394 kubelet[2819]: E1212 18:19:58.309259 2819 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5f5cb1ecb864cf416471871995f85f903ee0d0ca8c11391a854b90f147dff98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:19:58.310234 kubelet[2819]: E1212 18:19:58.309664 2819 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5f5cb1ecb864cf416471871995f85f903ee0d0ca8c11391a854b90f147dff98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-q4kjx" Dec 12 18:19:58.310234 kubelet[2819]: E1212 18:19:58.310026 2819 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5f5cb1ecb864cf416471871995f85f903ee0d0ca8c11391a854b90f147dff98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-q4kjx" Dec 12 18:19:58.310588 kubelet[2819]: E1212 18:19:58.310116 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-q4kjx_kube-system(f4950266-b324-4bd8-9271-ead6b00ca6f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-q4kjx_kube-system(f4950266-b324-4bd8-9271-ead6b00ca6f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5f5cb1ecb864cf416471871995f85f903ee0d0ca8c11391a854b90f147dff98\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-668d6bf9bc-q4kjx" podUID="f4950266-b324-4bd8-9271-ead6b00ca6f0" Dec 12 18:19:58.324525 containerd[1616]: time="2025-12-12T18:19:58.324373635Z" level=error msg="Failed to destroy network for sandbox \"74559e9e2cc5835b3481c620ce4fc4b5780e8ec56faefd66d03654b74aef6659\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:19:58.339245 containerd[1616]: time="2025-12-12T18:19:58.339119293Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f9c8d5fbb-p96pq,Uid:ed1bf1c8-646f-4c33-9642-90a577c1d786,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"74559e9e2cc5835b3481c620ce4fc4b5780e8ec56faefd66d03654b74aef6659\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:19:58.339986 kubelet[2819]: E1212 18:19:58.339471 2819 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74559e9e2cc5835b3481c620ce4fc4b5780e8ec56faefd66d03654b74aef6659\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:19:58.339986 kubelet[2819]: E1212 18:19:58.339585 2819 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74559e9e2cc5835b3481c620ce4fc4b5780e8ec56faefd66d03654b74aef6659\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f9c8d5fbb-p96pq" Dec 12 18:19:58.339986 kubelet[2819]: E1212 18:19:58.339622 2819 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74559e9e2cc5835b3481c620ce4fc4b5780e8ec56faefd66d03654b74aef6659\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f9c8d5fbb-p96pq" Dec 12 18:19:58.342026 kubelet[2819]: E1212 18:19:58.340465 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f9c8d5fbb-p96pq_calico-apiserver(ed1bf1c8-646f-4c33-9642-90a577c1d786)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f9c8d5fbb-p96pq_calico-apiserver(ed1bf1c8-646f-4c33-9642-90a577c1d786)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"74559e9e2cc5835b3481c620ce4fc4b5780e8ec56faefd66d03654b74aef6659\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f9c8d5fbb-p96pq" podUID="ed1bf1c8-646f-4c33-9642-90a577c1d786" Dec 12 18:19:58.361661 containerd[1616]: time="2025-12-12T18:19:58.361509307Z" level=error msg="Failed to destroy network for sandbox 
\"53a704a189ac4b770b0945b53b987805759c31580d495cdb1c9698bc5455014a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:19:58.379517 containerd[1616]: time="2025-12-12T18:19:58.379317161Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9bb959468-v58pb,Uid:f4c646c7-47f1-433d-b7c4-005cccecda6a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"53a704a189ac4b770b0945b53b987805759c31580d495cdb1c9698bc5455014a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:19:58.381652 kubelet[2819]: E1212 18:19:58.381511 2819 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53a704a189ac4b770b0945b53b987805759c31580d495cdb1c9698bc5455014a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:19:58.381652 kubelet[2819]: E1212 18:19:58.381589 2819 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53a704a189ac4b770b0945b53b987805759c31580d495cdb1c9698bc5455014a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9bb959468-v58pb" Dec 12 18:19:58.381652 kubelet[2819]: E1212 18:19:58.381622 2819 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53a704a189ac4b770b0945b53b987805759c31580d495cdb1c9698bc5455014a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9bb959468-v58pb" Dec 12 18:19:58.382327 kubelet[2819]: E1212 18:19:58.381681 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9bb959468-v58pb_calico-apiserver(f4c646c7-47f1-433d-b7c4-005cccecda6a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9bb959468-v58pb_calico-apiserver(f4c646c7-47f1-433d-b7c4-005cccecda6a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"53a704a189ac4b770b0945b53b987805759c31580d495cdb1c9698bc5455014a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9bb959468-v58pb" podUID="f4c646c7-47f1-433d-b7c4-005cccecda6a" Dec 12 18:19:58.416525 containerd[1616]: time="2025-12-12T18:19:58.416330130Z" level=error msg="Failed to destroy network for sandbox \"8f3971811432cab33c7a282e1785ecc0e8aaa682382d3f31c5eaf8eb8f555c7b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:19:58.428306 
containerd[1616]: time="2025-12-12T18:19:58.428234619Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bhjvt,Uid:516ed3cc-2563-4682-9bb4-937befb1cd30,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f3971811432cab33c7a282e1785ecc0e8aaa682382d3f31c5eaf8eb8f555c7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:19:58.428847 kubelet[2819]: E1212 18:19:58.428731 2819 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f3971811432cab33c7a282e1785ecc0e8aaa682382d3f31c5eaf8eb8f555c7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:19:58.428847 kubelet[2819]: E1212 18:19:58.428812 2819 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f3971811432cab33c7a282e1785ecc0e8aaa682382d3f31c5eaf8eb8f555c7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-bhjvt" Dec 12 18:19:58.428847 kubelet[2819]: E1212 18:19:58.428840 2819 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f3971811432cab33c7a282e1785ecc0e8aaa682382d3f31c5eaf8eb8f555c7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-bhjvt" Dec 12 18:19:58.429006 kubelet[2819]: E1212 18:19:58.428901 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-bhjvt_kube-system(516ed3cc-2563-4682-9bb4-937befb1cd30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-bhjvt_kube-system(516ed3cc-2563-4682-9bb4-937befb1cd30)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f3971811432cab33c7a282e1785ecc0e8aaa682382d3f31c5eaf8eb8f555c7b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-bhjvt" podUID="516ed3cc-2563-4682-9bb4-937befb1cd30" Dec 12 18:19:58.437968 containerd[1616]: time="2025-12-12T18:19:58.437740796Z" level=error msg="Failed to destroy network for sandbox \"2620a73964612a51fc3ef1a175802a5306afe1a08a05e89c46486f31e5608a29\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:19:58.445413 containerd[1616]: time="2025-12-12T18:19:58.445180380Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9bb959468-57r44,Uid:ab0029a5-8491-42f1-b060-fef0c0422b49,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2620a73964612a51fc3ef1a175802a5306afe1a08a05e89c46486f31e5608a29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:19:58.447506 kubelet[2819]: E1212 18:19:58.447250 2819 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2620a73964612a51fc3ef1a175802a5306afe1a08a05e89c46486f31e5608a29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:19:58.447506 kubelet[2819]: E1212 18:19:58.447324 2819 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2620a73964612a51fc3ef1a175802a5306afe1a08a05e89c46486f31e5608a29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9bb959468-57r44" Dec 12 18:19:58.447506 kubelet[2819]: E1212 18:19:58.447351 2819 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2620a73964612a51fc3ef1a175802a5306afe1a08a05e89c46486f31e5608a29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9bb959468-57r44" Dec 12 18:19:58.448207 containerd[1616]: time="2025-12-12T18:19:58.447348875Z" level=error msg="Failed to destroy network for sandbox \"438fe10da6b291e704b4258608c4532b90936a8128e1aa8c18ecab0b08d34528\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:19:58.448542 kubelet[2819]: E1212 18:19:58.447416 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9bb959468-57r44_calico-apiserver(ab0029a5-8491-42f1-b060-fef0c0422b49)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9bb959468-57r44_calico-apiserver(ab0029a5-8491-42f1-b060-fef0c0422b49)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2620a73964612a51fc3ef1a175802a5306afe1a08a05e89c46486f31e5608a29\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9bb959468-57r44" podUID="ab0029a5-8491-42f1-b060-fef0c0422b49" Dec 12 18:19:58.454135 containerd[1616]: time="2025-12-12T18:19:58.454069209Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59db8d7565-4jn5z,Uid:942e71a6-e303-45bb-a6a7-005da5952aa7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"438fe10da6b291e704b4258608c4532b90936a8128e1aa8c18ecab0b08d34528\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:19:58.455129 kubelet[2819]: E1212 
18:19:58.455080 2819 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"438fe10da6b291e704b4258608c4532b90936a8128e1aa8c18ecab0b08d34528\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:19:58.455731 kubelet[2819]: E1212 18:19:58.455461 2819 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"438fe10da6b291e704b4258608c4532b90936a8128e1aa8c18ecab0b08d34528\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-59db8d7565-4jn5z" Dec 12 18:19:58.456120 kubelet[2819]: E1212 18:19:58.455855 2819 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"438fe10da6b291e704b4258608c4532b90936a8128e1aa8c18ecab0b08d34528\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-59db8d7565-4jn5z" Dec 12 18:19:58.456590 kubelet[2819]: E1212 18:19:58.456283 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-59db8d7565-4jn5z_calico-system(942e71a6-e303-45bb-a6a7-005da5952aa7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-59db8d7565-4jn5z_calico-system(942e71a6-e303-45bb-a6a7-005da5952aa7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"438fe10da6b291e704b4258608c4532b90936a8128e1aa8c18ecab0b08d34528\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-59db8d7565-4jn5z" podUID="942e71a6-e303-45bb-a6a7-005da5952aa7" Dec 12 18:19:58.488403 containerd[1616]: time="2025-12-12T18:19:58.488349235Z" level=error msg="Failed to destroy network for sandbox \"e78d7a4e84447e05c9e670f0b495f208dd59839fe2bad1046b7a626d80b33f6e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:19:58.493064 containerd[1616]: time="2025-12-12T18:19:58.492988341Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5kbmx,Uid:3530bcd5-7985-42ba-8587-569180a87a41,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e78d7a4e84447e05c9e670f0b495f208dd59839fe2bad1046b7a626d80b33f6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:19:58.493647 kubelet[2819]: E1212 18:19:58.493571 2819 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e78d7a4e84447e05c9e670f0b495f208dd59839fe2bad1046b7a626d80b33f6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Dec 12 18:19:58.493871 kubelet[2819]: E1212 18:19:58.493789 2819 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e78d7a4e84447e05c9e670f0b495f208dd59839fe2bad1046b7a626d80b33f6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5kbmx" Dec 12 18:19:58.493871 kubelet[2819]: E1212 18:19:58.493824 2819 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e78d7a4e84447e05c9e670f0b495f208dd59839fe2bad1046b7a626d80b33f6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5kbmx" Dec 12 18:19:58.494127 kubelet[2819]: E1212 18:19:58.493959 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5kbmx_calico-system(3530bcd5-7985-42ba-8587-569180a87a41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5kbmx_calico-system(3530bcd5-7985-42ba-8587-569180a87a41)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e78d7a4e84447e05c9e670f0b495f208dd59839fe2bad1046b7a626d80b33f6e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5kbmx" podUID="3530bcd5-7985-42ba-8587-569180a87a41" Dec 12 18:19:59.147884 systemd[1]: run-netns-cni\x2dfa1f557b\x2df3a9\x2d2564\x2d3d20\x2d54187d58cb5a.mount: Deactivated successfully. Dec 12 18:19:59.148045 systemd[1]: run-netns-cni\x2d5970dc70\x2d5935\x2d41a2\x2de5fe\x2d664d7b5695bc.mount: Deactivated successfully. Dec 12 18:19:59.148153 systemd[1]: run-netns-cni\x2dbeff54e9\x2d5d5a\x2d1ce0\x2d58b1\x2d63729e8e133b.mount: Deactivated successfully. Dec 12 18:19:59.148269 systemd[1]: run-netns-cni\x2df2b42877\x2d7213\x2d2536\x2dbd5e\x2d783889d3551c.mount: Deactivated successfully. Dec 12 18:19:59.148383 systemd[1]: run-netns-cni\x2dcf5a46aa\x2dc300\x2d1a01\x2d1b9d\x2def440df25931.mount: Deactivated successfully. Dec 12 18:19:59.148476 systemd[1]: run-netns-cni\x2de2930c89\x2db2cc\x2d6363\x2d3bef\x2dfa34854fa0ca.mount: Deactivated successfully. Dec 12 18:20:05.326323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2726010987.mount: Deactivated successfully. 
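[editor's note] Every sandbox failure in the burst above shares one root cause: the Calico CNI plugin determines the node name by reading /var/lib/calico/nodename, a file that the calico/node container writes into the host-mounted /var/lib/calico/ directory once it is running. Until calico-node comes up (its image finishes pulling and the container starts at 18:20:06 below), every CNI add and delete fails with the same stat error, and systemd tears down the leftover run-netns-cni mounts each time. A minimal Go sketch of the check the error message describes, under the assumption that the plugin simply stats and reads this file (the calicoNodename helper is illustrative, not the plugin's actual code):

package main

import (
	"errors"
	"fmt"
	"os"
	"strings"
)

// nodenameFile is the default path that calico/node populates and host-mounts
// for the CNI plugin; while it is absent, CNI ADD/DEL cannot determine which
// Calico node owns this host.
const nodenameFile = "/var/lib/calico/nodename"

func calicoNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if errors.Is(err, os.ErrNotExist) {
		// Mirrors the hint in the log: the file only exists after the
		// calico/node container has started and mounted /var/lib/calico/.
		return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
	}
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := calicoNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("calico nodename:", name)
}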
Dec 12 18:20:05.601641 containerd[1616]: time="2025-12-12T18:20:05.550424471Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Dec 12 18:20:05.622349 containerd[1616]: time="2025-12-12T18:20:05.621745967Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:20:05.745736 containerd[1616]: time="2025-12-12T18:20:05.745669609Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:20:05.747123 containerd[1616]: time="2025-12-12T18:20:05.747049071Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.36131143s" Dec 12 18:20:05.747123 containerd[1616]: time="2025-12-12T18:20:05.747106900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 12 18:20:05.748027 containerd[1616]: time="2025-12-12T18:20:05.747986853Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:20:05.786354 containerd[1616]: time="2025-12-12T18:20:05.786287130Z" level=info msg="CreateContainer within sandbox \"6e0d51fadcfefda7127001d7b70ed9c060f1151732df558e439b45db3c574e52\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 12 18:20:05.933851 containerd[1616]: time="2025-12-12T18:20:05.933709276Z" level=info msg="Container a3571ee62a2f73dc9a3dea4b8a0c7ebde6d7e8b797f8c829adf6519e868cbaa4: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:20:05.935939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3561882743.mount: Deactivated successfully. Dec 12 18:20:05.997589 containerd[1616]: time="2025-12-12T18:20:05.997449278Z" level=info msg="CreateContainer within sandbox \"6e0d51fadcfefda7127001d7b70ed9c060f1151732df558e439b45db3c574e52\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a3571ee62a2f73dc9a3dea4b8a0c7ebde6d7e8b797f8c829adf6519e868cbaa4\"" Dec 12 18:20:05.999301 containerd[1616]: time="2025-12-12T18:20:05.999184314Z" level=info msg="StartContainer for \"a3571ee62a2f73dc9a3dea4b8a0c7ebde6d7e8b797f8c829adf6519e868cbaa4\"" Dec 12 18:20:06.007265 containerd[1616]: time="2025-12-12T18:20:06.007101607Z" level=info msg="connecting to shim a3571ee62a2f73dc9a3dea4b8a0c7ebde6d7e8b797f8c829adf6519e868cbaa4" address="unix:///run/containerd/s/6b02c001b1f60939778e7b023163988be4136c6f39e8657a671a82e581b86053" protocol=ttrpc version=3 Dec 12 18:20:06.289287 systemd[1]: Started cri-containerd-a3571ee62a2f73dc9a3dea4b8a0c7ebde6d7e8b797f8c829adf6519e868cbaa4.scope - libcontainer container a3571ee62a2f73dc9a3dea4b8a0c7ebde6d7e8b797f8c829adf6519e868cbaa4. 
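[editor's note] The entries above show containerd finishing the pull of ghcr.io/flatcar/calico/node:v3.30.4 (~157 MB in ~8.36 s), creating the calico-node container inside the existing sandbox, and starting it via a runtime-v2 shim over ttrpc. The kubelet drives this through CRI, but a roughly equivalent sequence against containerd's 1.x Go client looks like the sketch below; the image ref and the "k8s.io" namespace come from the log, while the container and snapshot names are placeholders and the pod-sandbox plumbing that CRI adds is omitted:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Talk to the same containerd instance the kubelet uses.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed containers live in the "k8s.io" namespace, as the runc
	// audit entries below also show.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// PullImage: the log reports this ref being pulled and unpacked.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.30.4", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer, minus the CRI-specific sandbox/spec handling.
	container, err := client.NewContainer(ctx, "calico-node-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("calico-node-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}

	// StartContainer corresponds to creating and starting a task.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	fmt.Println("started task with pid", task.Pid())
	// Stopping and deleting the task/container is omitted in this sketch.
}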
Dec 12 18:20:06.379000 audit: BPF prog-id=170 op=LOAD Dec 12 18:20:06.381413 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 12 18:20:06.381573 kernel: audit: type=1334 audit(1765563606.379:573): prog-id=170 op=LOAD Dec 12 18:20:06.379000 audit[3819]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0000d4488 a2=98 a3=0 items=0 ppid=3324 pid=3819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:06.379000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133353731656536326132663733646339613364656134623861306337 Dec 12 18:20:06.396520 kernel: audit: type=1300 audit(1765563606.379:573): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0000d4488 a2=98 a3=0 items=0 ppid=3324 pid=3819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:06.396684 kernel: audit: type=1327 audit(1765563606.379:573): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133353731656536326132663733646339613364656134623861306337 Dec 12 18:20:06.384000 audit: BPF prog-id=171 op=LOAD Dec 12 18:20:06.404555 kernel: audit: type=1334 audit(1765563606.384:574): prog-id=171 op=LOAD Dec 12 18:20:06.404711 kernel: audit: type=1300 audit(1765563606.384:574): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0000d4218 a2=98 a3=0 items=0 ppid=3324 pid=3819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:06.384000 audit[3819]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0000d4218 a2=98 a3=0 items=0 ppid=3324 pid=3819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:06.384000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133353731656536326132663733646339613364656134623861306337 Dec 12 18:20:06.422834 kernel: audit: type=1327 audit(1765563606.384:574): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133353731656536326132663733646339613364656134623861306337 Dec 12 18:20:06.422998 kernel: audit: type=1334 audit(1765563606.384:575): prog-id=171 op=UNLOAD Dec 12 18:20:06.384000 audit: BPF prog-id=171 op=UNLOAD Dec 12 18:20:06.427411 kernel: audit: type=1300 audit(1765563606.384:575): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3324 pid=3819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:06.384000 audit[3819]: 
SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3324 pid=3819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:06.384000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133353731656536326132663733646339613364656134623861306337 Dec 12 18:20:06.448695 kernel: audit: type=1327 audit(1765563606.384:575): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133353731656536326132663733646339613364656134623861306337 Dec 12 18:20:06.448847 kernel: audit: type=1334 audit(1765563606.384:576): prog-id=170 op=UNLOAD Dec 12 18:20:06.384000 audit: BPF prog-id=170 op=UNLOAD Dec 12 18:20:06.384000 audit[3819]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3324 pid=3819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:06.384000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133353731656536326132663733646339613364656134623861306337 Dec 12 18:20:06.384000 audit: BPF prog-id=172 op=LOAD Dec 12 18:20:06.384000 audit[3819]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0000d46e8 a2=98 a3=0 items=0 ppid=3324 pid=3819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:06.384000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133353731656536326132663733646339613364656134623861306337 Dec 12 18:20:06.506904 containerd[1616]: time="2025-12-12T18:20:06.506854207Z" level=info msg="StartContainer for \"a3571ee62a2f73dc9a3dea4b8a0c7ebde6d7e8b797f8c829adf6519e868cbaa4\" returns successfully" Dec 12 18:20:06.657395 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 12 18:20:06.657575 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
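[editor's note] The audit records around container start are BPF program loads and unloads issued by runc while it sets up the calico-node container (on cgroup v2 runc installs a BPF program for the device controller, and the immediate UNLOADs look like feature probes); the WireGuard module load at the end is presumably calico-node probing for WireGuard encryption support. The PROCTITLE field in these records is the process's argv, hex-encoded with NUL separators between arguments; the long runc value above decodes to "runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/a3571ee62a2f73dc9a3dea4b8a0c7", cut off at the kernel's 128-byte proctitle cap. A small stand-alone Go helper for decoding such values (the decodeProctitle name is illustrative, not an existing audit tool):

package main

import (
	"encoding/hex"
	"fmt"
	"os"
	"strings"
)

// decodeProctitle turns an audit PROCTITLE hex string back into the command
// line it encodes: raw argv bytes with NUL separators between arguments.
func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	return strings.ReplaceAll(string(raw), "\x00", " "), nil
}

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: decode-proctitle <hex>")
		os.Exit(2)
	}
	cmdline, err := decodeProctitle(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(cmdline)
}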
Dec 12 18:20:07.153430 kubelet[2819]: I1212 18:20:07.153116 2819 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/942e71a6-e303-45bb-a6a7-005da5952aa7-whisker-ca-bundle\") pod \"942e71a6-e303-45bb-a6a7-005da5952aa7\" (UID: \"942e71a6-e303-45bb-a6a7-005da5952aa7\") " Dec 12 18:20:07.155500 kubelet[2819]: I1212 18:20:07.155444 2819 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwlpl\" (UniqueName: \"kubernetes.io/projected/942e71a6-e303-45bb-a6a7-005da5952aa7-kube-api-access-lwlpl\") pod \"942e71a6-e303-45bb-a6a7-005da5952aa7\" (UID: \"942e71a6-e303-45bb-a6a7-005da5952aa7\") " Dec 12 18:20:07.156172 kubelet[2819]: I1212 18:20:07.155746 2819 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/942e71a6-e303-45bb-a6a7-005da5952aa7-whisker-backend-key-pair\") pod \"942e71a6-e303-45bb-a6a7-005da5952aa7\" (UID: \"942e71a6-e303-45bb-a6a7-005da5952aa7\") " Dec 12 18:20:07.156574 kubelet[2819]: I1212 18:20:07.155020 2819 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/942e71a6-e303-45bb-a6a7-005da5952aa7-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "942e71a6-e303-45bb-a6a7-005da5952aa7" (UID: "942e71a6-e303-45bb-a6a7-005da5952aa7"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 18:20:07.182137 kubelet[2819]: I1212 18:20:07.181912 2819 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/942e71a6-e303-45bb-a6a7-005da5952aa7-kube-api-access-lwlpl" (OuterVolumeSpecName: "kube-api-access-lwlpl") pod "942e71a6-e303-45bb-a6a7-005da5952aa7" (UID: "942e71a6-e303-45bb-a6a7-005da5952aa7"). InnerVolumeSpecName "kube-api-access-lwlpl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 18:20:07.185668 systemd[1]: var-lib-kubelet-pods-942e71a6\x2de303\x2d45bb\x2da6a7\x2d005da5952aa7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlwlpl.mount: Deactivated successfully. Dec 12 18:20:07.185982 systemd[1]: var-lib-kubelet-pods-942e71a6\x2de303\x2d45bb\x2da6a7\x2d005da5952aa7-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 12 18:20:07.191443 kubelet[2819]: I1212 18:20:07.190581 2819 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/942e71a6-e303-45bb-a6a7-005da5952aa7-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "942e71a6-e303-45bb-a6a7-005da5952aa7" (UID: "942e71a6-e303-45bb-a6a7-005da5952aa7"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 18:20:07.257876 kubelet[2819]: I1212 18:20:07.256797 2819 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/942e71a6-e303-45bb-a6a7-005da5952aa7-whisker-ca-bundle\") on node \"ci-4515.1.0-f-8be9c60ab1\" DevicePath \"\"" Dec 12 18:20:07.258162 kubelet[2819]: I1212 18:20:07.258099 2819 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lwlpl\" (UniqueName: \"kubernetes.io/projected/942e71a6-e303-45bb-a6a7-005da5952aa7-kube-api-access-lwlpl\") on node \"ci-4515.1.0-f-8be9c60ab1\" DevicePath \"\"" Dec 12 18:20:07.258162 kubelet[2819]: I1212 18:20:07.258127 2819 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/942e71a6-e303-45bb-a6a7-005da5952aa7-whisker-backend-key-pair\") on node \"ci-4515.1.0-f-8be9c60ab1\" DevicePath \"\"" Dec 12 18:20:07.483337 kubelet[2819]: E1212 18:20:07.483167 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:20:07.492794 systemd[1]: Removed slice kubepods-besteffort-pod942e71a6_e303_45bb_a6a7_005da5952aa7.slice - libcontainer container kubepods-besteffort-pod942e71a6_e303_45bb_a6a7_005da5952aa7.slice. Dec 12 18:20:07.526838 kubelet[2819]: I1212 18:20:07.523294 2819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-nnwcp" podStartSLOduration=3.218877898 podStartE2EDuration="21.523272115s" podCreationTimestamp="2025-12-12 18:19:46 +0000 UTC" firstStartedPulling="2025-12-12 18:19:47.44398199 +0000 UTC m=+25.555949646" lastFinishedPulling="2025-12-12 18:20:05.748376204 +0000 UTC m=+43.860343863" observedRunningTime="2025-12-12 18:20:07.522422144 +0000 UTC m=+45.634389822" watchObservedRunningTime="2025-12-12 18:20:07.523272115 +0000 UTC m=+45.635239779" Dec 12 18:20:07.710905 systemd[1]: Created slice kubepods-besteffort-podf2c6d001_1096_4786_820b_c2f7a945bcac.slice - libcontainer container kubepods-besteffort-podf2c6d001_1096_4786_820b_c2f7a945bcac.slice. 
Dec 12 18:20:07.761723 kubelet[2819]: I1212 18:20:07.761165 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f2c6d001-1096-4786-820b-c2f7a945bcac-whisker-backend-key-pair\") pod \"whisker-7565c6cc-lrgtt\" (UID: \"f2c6d001-1096-4786-820b-c2f7a945bcac\") " pod="calico-system/whisker-7565c6cc-lrgtt" Dec 12 18:20:07.762034 kubelet[2819]: I1212 18:20:07.762007 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2c6d001-1096-4786-820b-c2f7a945bcac-whisker-ca-bundle\") pod \"whisker-7565c6cc-lrgtt\" (UID: \"f2c6d001-1096-4786-820b-c2f7a945bcac\") " pod="calico-system/whisker-7565c6cc-lrgtt" Dec 12 18:20:07.762651 kubelet[2819]: I1212 18:20:07.762606 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhrn4\" (UniqueName: \"kubernetes.io/projected/f2c6d001-1096-4786-820b-c2f7a945bcac-kube-api-access-fhrn4\") pod \"whisker-7565c6cc-lrgtt\" (UID: \"f2c6d001-1096-4786-820b-c2f7a945bcac\") " pod="calico-system/whisker-7565c6cc-lrgtt" Dec 12 18:20:08.021214 containerd[1616]: time="2025-12-12T18:20:08.021024061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7565c6cc-lrgtt,Uid:f2c6d001-1096-4786-820b-c2f7a945bcac,Namespace:calico-system,Attempt:0,}" Dec 12 18:20:08.130538 kubelet[2819]: I1212 18:20:08.130466 2819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="942e71a6-e303-45bb-a6a7-005da5952aa7" path="/var/lib/kubelet/pods/942e71a6-e303-45bb-a6a7-005da5952aa7/volumes" Dec 12 18:20:08.485587 systemd-networkd[1510]: cali34b13c9af94: Link UP Dec 12 18:20:08.488774 systemd-networkd[1510]: cali34b13c9af94: Gained carrier Dec 12 18:20:08.491942 kubelet[2819]: E1212 18:20:08.491832 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:20:08.583700 containerd[1616]: 2025-12-12 18:20:08.086 [INFO][3912] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:20:08.583700 containerd[1616]: 2025-12-12 18:20:08.135 [INFO][3912] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--f--8be9c60ab1-k8s-whisker--7565c6cc--lrgtt-eth0 whisker-7565c6cc- calico-system f2c6d001-1096-4786-820b-c2f7a945bcac 986 0 2025-12-12 18:20:07 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7565c6cc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4515.1.0-f-8be9c60ab1 whisker-7565c6cc-lrgtt eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali34b13c9af94 [] [] }} ContainerID="4539ed3f0a9e7df19a00a1525fd0478d4dec08ad81778aa0b1440f1d99b42bff" Namespace="calico-system" Pod="whisker-7565c6cc-lrgtt" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-whisker--7565c6cc--lrgtt-" Dec 12 18:20:08.583700 containerd[1616]: 2025-12-12 18:20:08.136 [INFO][3912] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4539ed3f0a9e7df19a00a1525fd0478d4dec08ad81778aa0b1440f1d99b42bff" Namespace="calico-system" Pod="whisker-7565c6cc-lrgtt" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-whisker--7565c6cc--lrgtt-eth0" Dec 12 18:20:08.583700 
containerd[1616]: 2025-12-12 18:20:08.363 [INFO][3923] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4539ed3f0a9e7df19a00a1525fd0478d4dec08ad81778aa0b1440f1d99b42bff" HandleID="k8s-pod-network.4539ed3f0a9e7df19a00a1525fd0478d4dec08ad81778aa0b1440f1d99b42bff" Workload="ci--4515.1.0--f--8be9c60ab1-k8s-whisker--7565c6cc--lrgtt-eth0" Dec 12 18:20:08.584257 containerd[1616]: 2025-12-12 18:20:08.368 [INFO][3923] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4539ed3f0a9e7df19a00a1525fd0478d4dec08ad81778aa0b1440f1d99b42bff" HandleID="k8s-pod-network.4539ed3f0a9e7df19a00a1525fd0478d4dec08ad81778aa0b1440f1d99b42bff" Workload="ci--4515.1.0--f--8be9c60ab1-k8s-whisker--7565c6cc--lrgtt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005fe4e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-f-8be9c60ab1", "pod":"whisker-7565c6cc-lrgtt", "timestamp":"2025-12-12 18:20:08.363918047 +0000 UTC"}, Hostname:"ci-4515.1.0-f-8be9c60ab1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:20:08.584257 containerd[1616]: 2025-12-12 18:20:08.368 [INFO][3923] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:20:08.584257 containerd[1616]: 2025-12-12 18:20:08.369 [INFO][3923] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:20:08.584257 containerd[1616]: 2025-12-12 18:20:08.369 [INFO][3923] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-f-8be9c60ab1' Dec 12 18:20:08.584257 containerd[1616]: 2025-12-12 18:20:08.398 [INFO][3923] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4539ed3f0a9e7df19a00a1525fd0478d4dec08ad81778aa0b1440f1d99b42bff" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:08.584257 containerd[1616]: 2025-12-12 18:20:08.412 [INFO][3923] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:08.584257 containerd[1616]: 2025-12-12 18:20:08.424 [INFO][3923] ipam/ipam.go 511: Trying affinity for 192.168.101.128/26 host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:08.584257 containerd[1616]: 2025-12-12 18:20:08.427 [INFO][3923] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.128/26 host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:08.584257 containerd[1616]: 2025-12-12 18:20:08.432 [INFO][3923] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.128/26 host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:08.584819 containerd[1616]: 2025-12-12 18:20:08.432 [INFO][3923] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.4539ed3f0a9e7df19a00a1525fd0478d4dec08ad81778aa0b1440f1d99b42bff" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:08.584819 containerd[1616]: 2025-12-12 18:20:08.435 [INFO][3923] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4539ed3f0a9e7df19a00a1525fd0478d4dec08ad81778aa0b1440f1d99b42bff Dec 12 18:20:08.584819 containerd[1616]: 2025-12-12 18:20:08.446 [INFO][3923] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.4539ed3f0a9e7df19a00a1525fd0478d4dec08ad81778aa0b1440f1d99b42bff" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:08.584819 containerd[1616]: 2025-12-12 18:20:08.454 [INFO][3923] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.101.129/26] 
block=192.168.101.128/26 handle="k8s-pod-network.4539ed3f0a9e7df19a00a1525fd0478d4dec08ad81778aa0b1440f1d99b42bff" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:08.584819 containerd[1616]: 2025-12-12 18:20:08.455 [INFO][3923] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.129/26] handle="k8s-pod-network.4539ed3f0a9e7df19a00a1525fd0478d4dec08ad81778aa0b1440f1d99b42bff" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:08.584819 containerd[1616]: 2025-12-12 18:20:08.455 [INFO][3923] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:20:08.584819 containerd[1616]: 2025-12-12 18:20:08.455 [INFO][3923] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.101.129/26] IPv6=[] ContainerID="4539ed3f0a9e7df19a00a1525fd0478d4dec08ad81778aa0b1440f1d99b42bff" HandleID="k8s-pod-network.4539ed3f0a9e7df19a00a1525fd0478d4dec08ad81778aa0b1440f1d99b42bff" Workload="ci--4515.1.0--f--8be9c60ab1-k8s-whisker--7565c6cc--lrgtt-eth0" Dec 12 18:20:08.585246 containerd[1616]: 2025-12-12 18:20:08.460 [INFO][3912] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4539ed3f0a9e7df19a00a1525fd0478d4dec08ad81778aa0b1440f1d99b42bff" Namespace="calico-system" Pod="whisker-7565c6cc-lrgtt" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-whisker--7565c6cc--lrgtt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--f--8be9c60ab1-k8s-whisker--7565c6cc--lrgtt-eth0", GenerateName:"whisker-7565c6cc-", Namespace:"calico-system", SelfLink:"", UID:"f2c6d001-1096-4786-820b-c2f7a945bcac", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 20, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7565c6cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-f-8be9c60ab1", ContainerID:"", Pod:"whisker-7565c6cc-lrgtt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.101.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali34b13c9af94", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:20:08.585246 containerd[1616]: 2025-12-12 18:20:08.460 [INFO][3912] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.129/32] ContainerID="4539ed3f0a9e7df19a00a1525fd0478d4dec08ad81778aa0b1440f1d99b42bff" Namespace="calico-system" Pod="whisker-7565c6cc-lrgtt" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-whisker--7565c6cc--lrgtt-eth0" Dec 12 18:20:08.585460 containerd[1616]: 2025-12-12 18:20:08.460 [INFO][3912] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali34b13c9af94 ContainerID="4539ed3f0a9e7df19a00a1525fd0478d4dec08ad81778aa0b1440f1d99b42bff" Namespace="calico-system" Pod="whisker-7565c6cc-lrgtt" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-whisker--7565c6cc--lrgtt-eth0" Dec 12 18:20:08.585460 containerd[1616]: 2025-12-12 18:20:08.492 [INFO][3912] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4539ed3f0a9e7df19a00a1525fd0478d4dec08ad81778aa0b1440f1d99b42bff" Namespace="calico-system" Pod="whisker-7565c6cc-lrgtt" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-whisker--7565c6cc--lrgtt-eth0" Dec 12 18:20:08.585793 containerd[1616]: 2025-12-12 18:20:08.493 [INFO][3912] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4539ed3f0a9e7df19a00a1525fd0478d4dec08ad81778aa0b1440f1d99b42bff" Namespace="calico-system" Pod="whisker-7565c6cc-lrgtt" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-whisker--7565c6cc--lrgtt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--f--8be9c60ab1-k8s-whisker--7565c6cc--lrgtt-eth0", GenerateName:"whisker-7565c6cc-", Namespace:"calico-system", SelfLink:"", UID:"f2c6d001-1096-4786-820b-c2f7a945bcac", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 20, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7565c6cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-f-8be9c60ab1", ContainerID:"4539ed3f0a9e7df19a00a1525fd0478d4dec08ad81778aa0b1440f1d99b42bff", Pod:"whisker-7565c6cc-lrgtt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.101.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali34b13c9af94", MAC:"76:fa:72:3d:01:e0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:20:08.585920 containerd[1616]: 2025-12-12 18:20:08.556 [INFO][3912] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4539ed3f0a9e7df19a00a1525fd0478d4dec08ad81778aa0b1440f1d99b42bff" Namespace="calico-system" Pod="whisker-7565c6cc-lrgtt" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-whisker--7565c6cc--lrgtt-eth0" Dec 12 18:20:08.964640 containerd[1616]: time="2025-12-12T18:20:08.964456174Z" level=info msg="connecting to shim 4539ed3f0a9e7df19a00a1525fd0478d4dec08ad81778aa0b1440f1d99b42bff" address="unix:///run/containerd/s/a5b5a5b9301bab09753df5add050d481935cfdfb031f27a05b54ec5140a07821" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:20:09.039903 systemd[1]: Started cri-containerd-4539ed3f0a9e7df19a00a1525fd0478d4dec08ad81778aa0b1440f1d99b42bff.scope - libcontainer container 4539ed3f0a9e7df19a00a1525fd0478d4dec08ad81778aa0b1440f1d99b42bff. 
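[editor's note] The CNI trace above shows Calico confirming this node's affinity for the IPAM block 192.168.101.128/26 (Calico's default block size, 64 addresses per block), assigning 192.168.101.129/32 to the new whisker pod, and writing the WorkloadEndpoint back to the datastore before the sandbox is handed off to containerd; the host-side veth cali34b13c9af94 came up a few lines earlier. A quick check of the block arithmetic with Go's net/netip, nothing Calico-specific:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.101.128/26")
	pod := netip.MustParseAddr("192.168.101.129")

	// A /26 leaves 32-26 = 6 host bits, i.e. 2^6 = 64 addresses per block.
	size := 1 << (32 - block.Bits())
	fmt.Printf("block %s holds %d addresses\n", block, size)
	fmt.Printf("pod IP %s inside block: %v\n", pod, block.Contains(pod))

	// Walk to the last address in the block for reference (….191).
	last := block.Addr()
	for i := 0; i < size-1; i++ {
		last = last.Next()
	}
	fmt.Println("block range:", block.Addr(), "-", last)
}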
Dec 12 18:20:09.070000 audit: BPF prog-id=173 op=LOAD Dec 12 18:20:09.071000 audit: BPF prog-id=174 op=LOAD Dec 12 18:20:09.071000 audit[4064]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4053 pid=4064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:09.071000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435333965643366306139653764663139613030613135323566643034 Dec 12 18:20:09.071000 audit: BPF prog-id=174 op=UNLOAD Dec 12 18:20:09.071000 audit[4064]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4053 pid=4064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:09.071000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435333965643366306139653764663139613030613135323566643034 Dec 12 18:20:09.072000 audit: BPF prog-id=175 op=LOAD Dec 12 18:20:09.072000 audit[4064]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4053 pid=4064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:09.072000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435333965643366306139653764663139613030613135323566643034 Dec 12 18:20:09.072000 audit: BPF prog-id=176 op=LOAD Dec 12 18:20:09.072000 audit[4064]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4053 pid=4064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:09.072000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435333965643366306139653764663139613030613135323566643034 Dec 12 18:20:09.072000 audit: BPF prog-id=176 op=UNLOAD Dec 12 18:20:09.072000 audit[4064]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4053 pid=4064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:09.072000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435333965643366306139653764663139613030613135323566643034 Dec 12 18:20:09.073000 audit: BPF prog-id=175 op=UNLOAD Dec 12 18:20:09.073000 audit[4064]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4053 pid=4064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:09.073000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435333965643366306139653764663139613030613135323566643034 Dec 12 18:20:09.073000 audit: BPF prog-id=177 op=LOAD Dec 12 18:20:09.073000 audit[4064]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4053 pid=4064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:09.073000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435333965643366306139653764663139613030613135323566643034 Dec 12 18:20:09.194458 containerd[1616]: time="2025-12-12T18:20:09.194305934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7565c6cc-lrgtt,Uid:f2c6d001-1096-4786-820b-c2f7a945bcac,Namespace:calico-system,Attempt:0,} returns sandbox id \"4539ed3f0a9e7df19a00a1525fd0478d4dec08ad81778aa0b1440f1d99b42bff\"" Dec 12 18:20:09.200352 containerd[1616]: time="2025-12-12T18:20:09.199671672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:20:09.495222 kubelet[2819]: E1212 18:20:09.495167 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:20:09.544000 audit: BPF prog-id=178 op=LOAD Dec 12 18:20:09.544000 audit[4146]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd7be818d0 a2=98 a3=1fffffffffffffff items=0 ppid=3949 pid=4146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:09.544000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 18:20:09.545000 audit: BPF prog-id=178 op=UNLOAD Dec 12 18:20:09.545000 audit[4146]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd7be818a0 a3=0 items=0 ppid=3949 pid=4146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:09.545000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 18:20:09.545000 audit: BPF prog-id=179 op=LOAD Dec 12 18:20:09.545000 audit[4146]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd7be817b0 a2=94 a3=3 items=0 
ppid=3949 pid=4146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:09.545000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 18:20:09.545000 audit: BPF prog-id=179 op=UNLOAD Dec 12 18:20:09.545000 audit[4146]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd7be817b0 a2=94 a3=3 items=0 ppid=3949 pid=4146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:09.545000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 18:20:09.545000 audit: BPF prog-id=180 op=LOAD Dec 12 18:20:09.545000 audit[4146]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd7be817f0 a2=94 a3=7ffd7be819d0 items=0 ppid=3949 pid=4146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:09.545000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 18:20:09.545000 audit: BPF prog-id=180 op=UNLOAD Dec 12 18:20:09.545000 audit[4146]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd7be817f0 a2=94 a3=7ffd7be819d0 items=0 ppid=3949 pid=4146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:09.545000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 18:20:09.550000 audit: BPF prog-id=181 op=LOAD Dec 12 18:20:09.550000 audit[4147]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcf6381780 a2=98 a3=3 items=0 ppid=3949 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:09.550000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:20:09.551000 audit: BPF prog-id=181 op=UNLOAD Dec 12 18:20:09.551000 audit[4147]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffcf6381750 a3=0 items=0 ppid=3949 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:09.551000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 
18:20:09.551000 audit: BPF prog-id=182 op=LOAD Dec 12 18:20:09.551000 audit[4147]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcf6381570 a2=94 a3=54428f items=0 ppid=3949 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:09.551000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:20:09.551000 audit: BPF prog-id=182 op=UNLOAD Dec 12 18:20:09.551000 audit[4147]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcf6381570 a2=94 a3=54428f items=0 ppid=3949 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:09.551000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:20:09.552000 audit: BPF prog-id=183 op=LOAD Dec 12 18:20:09.552000 audit[4147]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcf63815a0 a2=94 a3=2 items=0 ppid=3949 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:09.552000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:20:09.552000 audit: BPF prog-id=183 op=UNLOAD Dec 12 18:20:09.552000 audit[4147]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcf63815a0 a2=0 a3=2 items=0 ppid=3949 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:09.552000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:20:09.593762 containerd[1616]: time="2025-12-12T18:20:09.593673578Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:20:09.607061 containerd[1616]: time="2025-12-12T18:20:09.596297347Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:20:09.614324 containerd[1616]: time="2025-12-12T18:20:09.599156304Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 18:20:09.634881 kubelet[2819]: E1212 18:20:09.634822 2819 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:20:09.635076 kubelet[2819]: E1212 18:20:09.634899 2819 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:20:09.637977 kubelet[2819]: E1212 18:20:09.637894 2819 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4fe5f617f54d4643bcb5bae7103038b0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fhrn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7565c6cc-lrgtt_calico-system(f2c6d001-1096-4786-820b-c2f7a945bcac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:20:09.640701 containerd[1616]: time="2025-12-12T18:20:09.640650653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:20:09.936000 audit: BPF prog-id=184 op=LOAD Dec 12 18:20:09.936000 audit[4147]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcf6381460 a2=94 a3=1 items=0 ppid=3949 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:09.936000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:20:09.936000 audit: BPF prog-id=184 op=UNLOAD Dec 12 18:20:09.936000 audit[4147]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcf6381460 a2=94 a3=1 items=0 ppid=3949 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:09.936000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:20:09.958000 audit: BPF prog-id=185 op=LOAD Dec 12 18:20:09.958000 audit[4147]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcf6381450 a2=94 a3=4 items=0 ppid=3949 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:09.958000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:20:09.958000 audit: BPF prog-id=185 op=UNLOAD Dec 12 18:20:09.958000 audit[4147]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffcf6381450 a2=0 a3=4 
items=0 ppid=3949 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:09.958000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:20:09.959000 audit: BPF prog-id=186 op=LOAD Dec 12 18:20:09.959000 audit[4147]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcf63812b0 a2=94 a3=5 items=0 ppid=3949 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:09.959000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:20:09.959000 audit: BPF prog-id=186 op=UNLOAD Dec 12 18:20:09.959000 audit[4147]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffcf63812b0 a2=0 a3=5 items=0 ppid=3949 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:09.959000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:20:09.959000 audit: BPF prog-id=187 op=LOAD Dec 12 18:20:09.959000 audit[4147]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcf63814d0 a2=94 a3=6 items=0 ppid=3949 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:09.959000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:20:09.959000 audit: BPF prog-id=187 op=UNLOAD Dec 12 18:20:09.959000 audit[4147]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffcf63814d0 a2=0 a3=6 items=0 ppid=3949 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:09.959000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:20:09.960000 audit: BPF prog-id=188 op=LOAD Dec 12 18:20:09.960000 audit[4147]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcf6380c80 a2=94 a3=88 items=0 ppid=3949 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:09.960000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:20:09.960000 audit: BPF prog-id=189 op=LOAD Dec 12 18:20:09.960000 audit[4147]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffcf6380b00 a2=94 a3=2 items=0 ppid=3949 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:09.960000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:20:09.960000 audit: BPF prog-id=189 op=UNLOAD Dec 12 18:20:09.960000 audit[4147]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffcf6380b30 a2=0 a3=7ffcf6380c30 items=0 ppid=3949 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:09.960000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:20:09.961000 audit: BPF prog-id=188 op=UNLOAD Dec 12 18:20:09.961000 audit[4147]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=18b64d10 a2=0 a3=825ac5cdf1546e6d items=0 ppid=3949 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:09.961000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:20:09.978000 audit: BPF prog-id=190 op=LOAD Dec 12 18:20:09.978000 audit[4156]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff90fa6f70 a2=98 a3=1999999999999999 items=0 ppid=3949 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:09.978000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 18:20:09.978000 audit: BPF prog-id=190 op=UNLOAD Dec 12 18:20:09.978000 audit[4156]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff90fa6f40 a3=0 items=0 ppid=3949 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:09.978000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 18:20:09.978000 audit: BPF prog-id=191 op=LOAD Dec 12 18:20:09.978000 audit[4156]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff90fa6e50 a2=94 a3=ffff items=0 ppid=3949 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:09.978000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 18:20:09.978000 audit: BPF prog-id=191 op=UNLOAD Dec 12 18:20:09.978000 audit[4156]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff90fa6e50 a2=94 a3=ffff items=0 ppid=3949 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:09.978000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 18:20:09.978000 audit: BPF prog-id=192 op=LOAD Dec 12 18:20:09.978000 audit[4156]: SYSCALL arch=c000003e syscall=321 success=yes 
exit=3 a0=5 a1=7fff90fa6e90 a2=94 a3=7fff90fa7070 items=0 ppid=3949 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:09.978000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 18:20:09.978000 audit: BPF prog-id=192 op=UNLOAD Dec 12 18:20:09.978000 audit[4156]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff90fa6e90 a2=94 a3=7fff90fa7070 items=0 ppid=3949 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:09.978000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 18:20:09.993963 containerd[1616]: time="2025-12-12T18:20:09.993895899Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:20:09.996523 containerd[1616]: time="2025-12-12T18:20:09.995449568Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:20:09.997024 containerd[1616]: time="2025-12-12T18:20:09.995524600Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 18:20:09.997113 kubelet[2819]: E1212 18:20:09.996776 2819 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:20:09.997113 kubelet[2819]: E1212 18:20:09.996846 2819 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:20:10.000177 kubelet[2819]: E1212 18:20:09.999709 2819 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fhrn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7565c6cc-lrgtt_calico-system(f2c6d001-1096-4786-820b-c2f7a945bcac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:20:10.002506 kubelet[2819]: E1212 18:20:10.002363 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7565c6cc-lrgtt" podUID="f2c6d001-1096-4786-820b-c2f7a945bcac" Dec 12 18:20:10.103940 systemd-networkd[1510]: vxlan.calico: Link UP Dec 12 18:20:10.103955 systemd-networkd[1510]: vxlan.calico: Gained carrier Dec 12 18:20:10.128257 kubelet[2819]: E1212 18:20:10.128180 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:20:10.132569 containerd[1616]: time="2025-12-12T18:20:10.132375105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bhjvt,Uid:516ed3cc-2563-4682-9bb4-937befb1cd30,Namespace:kube-system,Attempt:0,}" Dec 12 18:20:10.189000 audit: 
BPF prog-id=193 op=LOAD Dec 12 18:20:10.189000 audit[4190]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc21fa2620 a2=98 a3=0 items=0 ppid=3949 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.189000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:20:10.189000 audit: BPF prog-id=193 op=UNLOAD Dec 12 18:20:10.189000 audit[4190]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc21fa25f0 a3=0 items=0 ppid=3949 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.189000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:20:10.189000 audit: BPF prog-id=194 op=LOAD Dec 12 18:20:10.189000 audit[4190]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc21fa2430 a2=94 a3=54428f items=0 ppid=3949 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.189000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:20:10.190000 audit: BPF prog-id=194 op=UNLOAD Dec 12 18:20:10.190000 audit[4190]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc21fa2430 a2=94 a3=54428f items=0 ppid=3949 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.190000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:20:10.190000 audit: BPF prog-id=195 op=LOAD Dec 12 18:20:10.190000 audit[4190]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc21fa2460 a2=94 a3=2 items=0 ppid=3949 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.190000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:20:10.190000 audit: BPF prog-id=195 op=UNLOAD Dec 12 18:20:10.190000 audit[4190]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc21fa2460 a2=0 a3=2 items=0 ppid=3949 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.190000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:20:10.191000 audit: BPF prog-id=196 op=LOAD Dec 12 18:20:10.191000 audit[4190]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc21fa2210 a2=94 a3=4 items=0 ppid=3949 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.191000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:20:10.191000 audit: BPF prog-id=196 op=UNLOAD Dec 12 18:20:10.191000 audit[4190]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc21fa2210 a2=94 a3=4 items=0 ppid=3949 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.191000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:20:10.191000 audit: BPF prog-id=197 op=LOAD Dec 12 18:20:10.191000 audit[4190]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc21fa2310 a2=94 a3=7ffc21fa2490 items=0 ppid=3949 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.191000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:20:10.191000 audit: BPF prog-id=197 op=UNLOAD Dec 12 18:20:10.191000 audit[4190]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc21fa2310 a2=0 a3=7ffc21fa2490 items=0 ppid=3949 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.191000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:20:10.193000 audit: BPF prog-id=198 op=LOAD Dec 12 18:20:10.193000 audit[4190]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc21fa1a40 a2=94 a3=2 items=0 ppid=3949 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.193000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 
18:20:10.193000 audit: BPF prog-id=198 op=UNLOAD Dec 12 18:20:10.193000 audit[4190]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc21fa1a40 a2=0 a3=2 items=0 ppid=3949 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.193000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:20:10.193000 audit: BPF prog-id=199 op=LOAD Dec 12 18:20:10.193000 audit[4190]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc21fa1b40 a2=94 a3=30 items=0 ppid=3949 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.193000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:20:10.203000 audit: BPF prog-id=200 op=LOAD Dec 12 18:20:10.203000 audit[4194]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff6b9a6ba0 a2=98 a3=0 items=0 ppid=3949 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.203000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:20:10.203000 audit: BPF prog-id=200 op=UNLOAD Dec 12 18:20:10.203000 audit[4194]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff6b9a6b70 a3=0 items=0 ppid=3949 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.203000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:20:10.203000 audit: BPF prog-id=201 op=LOAD Dec 12 18:20:10.203000 audit[4194]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff6b9a6990 a2=94 a3=54428f items=0 ppid=3949 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.203000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:20:10.203000 audit: BPF prog-id=201 op=UNLOAD Dec 12 18:20:10.203000 audit[4194]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff6b9a6990 a2=94 a3=54428f items=0 ppid=3949 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.203000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:20:10.203000 audit: BPF prog-id=202 op=LOAD Dec 12 18:20:10.203000 audit[4194]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff6b9a69c0 a2=94 a3=2 items=0 ppid=3949 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.203000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:20:10.203000 audit: BPF prog-id=202 op=UNLOAD Dec 12 18:20:10.203000 audit[4194]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff6b9a69c0 a2=0 a3=2 items=0 ppid=3949 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.203000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:20:10.361615 systemd-networkd[1510]: cali34b13c9af94: Gained IPv6LL Dec 12 18:20:10.396247 systemd-networkd[1510]: cali60c8507e1d2: Link UP Dec 12 18:20:10.399218 systemd-networkd[1510]: cali60c8507e1d2: Gained carrier Dec 12 18:20:10.433790 containerd[1616]: 2025-12-12 18:20:10.263 [INFO][4182] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--f--8be9c60ab1-k8s-coredns--668d6bf9bc--bhjvt-eth0 coredns-668d6bf9bc- kube-system 516ed3cc-2563-4682-9bb4-937befb1cd30 913 0 2025-12-12 18:19:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515.1.0-f-8be9c60ab1 coredns-668d6bf9bc-bhjvt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali60c8507e1d2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="210543e581f67d59e33800361c6981ccf2374bea99b58249670a7363274da8e3" Namespace="kube-system" Pod="coredns-668d6bf9bc-bhjvt" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-coredns--668d6bf9bc--bhjvt-" Dec 12 18:20:10.433790 containerd[1616]: 2025-12-12 18:20:10.263 [INFO][4182] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="210543e581f67d59e33800361c6981ccf2374bea99b58249670a7363274da8e3" Namespace="kube-system" Pod="coredns-668d6bf9bc-bhjvt" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-coredns--668d6bf9bc--bhjvt-eth0" Dec 12 18:20:10.433790 containerd[1616]: 2025-12-12 18:20:10.318 [INFO][4201] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="210543e581f67d59e33800361c6981ccf2374bea99b58249670a7363274da8e3" HandleID="k8s-pod-network.210543e581f67d59e33800361c6981ccf2374bea99b58249670a7363274da8e3" Workload="ci--4515.1.0--f--8be9c60ab1-k8s-coredns--668d6bf9bc--bhjvt-eth0" Dec 12 18:20:10.434441 containerd[1616]: 2025-12-12 18:20:10.318 [INFO][4201] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="210543e581f67d59e33800361c6981ccf2374bea99b58249670a7363274da8e3" 
HandleID="k8s-pod-network.210543e581f67d59e33800361c6981ccf2374bea99b58249670a7363274da8e3" Workload="ci--4515.1.0--f--8be9c60ab1-k8s-coredns--668d6bf9bc--bhjvt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032d3b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515.1.0-f-8be9c60ab1", "pod":"coredns-668d6bf9bc-bhjvt", "timestamp":"2025-12-12 18:20:10.318346381 +0000 UTC"}, Hostname:"ci-4515.1.0-f-8be9c60ab1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:20:10.434441 containerd[1616]: 2025-12-12 18:20:10.318 [INFO][4201] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:20:10.434441 containerd[1616]: 2025-12-12 18:20:10.318 [INFO][4201] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:20:10.434441 containerd[1616]: 2025-12-12 18:20:10.318 [INFO][4201] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-f-8be9c60ab1' Dec 12 18:20:10.434441 containerd[1616]: 2025-12-12 18:20:10.327 [INFO][4201] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.210543e581f67d59e33800361c6981ccf2374bea99b58249670a7363274da8e3" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:10.434441 containerd[1616]: 2025-12-12 18:20:10.339 [INFO][4201] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:10.434441 containerd[1616]: 2025-12-12 18:20:10.350 [INFO][4201] ipam/ipam.go 511: Trying affinity for 192.168.101.128/26 host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:10.434441 containerd[1616]: 2025-12-12 18:20:10.355 [INFO][4201] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.128/26 host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:10.434441 containerd[1616]: 2025-12-12 18:20:10.359 [INFO][4201] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.128/26 host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:10.434996 containerd[1616]: 2025-12-12 18:20:10.359 [INFO][4201] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.210543e581f67d59e33800361c6981ccf2374bea99b58249670a7363274da8e3" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:10.434996 containerd[1616]: 2025-12-12 18:20:10.363 [INFO][4201] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.210543e581f67d59e33800361c6981ccf2374bea99b58249670a7363274da8e3 Dec 12 18:20:10.434996 containerd[1616]: 2025-12-12 18:20:10.374 [INFO][4201] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.210543e581f67d59e33800361c6981ccf2374bea99b58249670a7363274da8e3" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:10.434996 containerd[1616]: 2025-12-12 18:20:10.382 [INFO][4201] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.101.130/26] block=192.168.101.128/26 handle="k8s-pod-network.210543e581f67d59e33800361c6981ccf2374bea99b58249670a7363274da8e3" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:10.434996 containerd[1616]: 2025-12-12 18:20:10.383 [INFO][4201] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.130/26] handle="k8s-pod-network.210543e581f67d59e33800361c6981ccf2374bea99b58249670a7363274da8e3" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:10.434996 containerd[1616]: 2025-12-12 18:20:10.384 [INFO][4201] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
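Note: the proctitle= values in the audit records above are hex-encoded command lines; auditd prints the field as hex because the recorded argv uses NUL bytes as argument separators. Decoded, they correspond to the bpftool invocations Calico makes against /sys/fs/bpf (map create for cali_ctlb_progs and calico_failsafe_ports_v1, map list --json, prog load of filter.o for the XDP prefilter) and to the runc invocations issued by containerd. A minimal decoding sketch in Python, assuming only that encoding; decode_proctitle is an illustrative helper name, not something taken from this log:

    # Decode an auditd PROCTITLE hex string into a readable command line.
    # The kernel records argv with NUL separators, so auditd hex-encodes it.
    def decode_proctitle(hex_value: str) -> str:
        raw = bytes.fromhex(hex_value)
        return " ".join(
            part.decode("utf-8", "replace") for part in raw.split(b"\x00") if part
        )

    # Example value copied verbatim from one of the bpftool records above:
    print(decode_proctitle("627066746F6F6C006D6170006C697374002D2D6A736F6E"))
    # -> bpftool map list --json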
Dec 12 18:20:10.434996 containerd[1616]: 2025-12-12 18:20:10.384 [INFO][4201] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.101.130/26] IPv6=[] ContainerID="210543e581f67d59e33800361c6981ccf2374bea99b58249670a7363274da8e3" HandleID="k8s-pod-network.210543e581f67d59e33800361c6981ccf2374bea99b58249670a7363274da8e3" Workload="ci--4515.1.0--f--8be9c60ab1-k8s-coredns--668d6bf9bc--bhjvt-eth0" Dec 12 18:20:10.435359 containerd[1616]: 2025-12-12 18:20:10.389 [INFO][4182] cni-plugin/k8s.go 418: Populated endpoint ContainerID="210543e581f67d59e33800361c6981ccf2374bea99b58249670a7363274da8e3" Namespace="kube-system" Pod="coredns-668d6bf9bc-bhjvt" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-coredns--668d6bf9bc--bhjvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--f--8be9c60ab1-k8s-coredns--668d6bf9bc--bhjvt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"516ed3cc-2563-4682-9bb4-937befb1cd30", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 19, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-f-8be9c60ab1", ContainerID:"", Pod:"coredns-668d6bf9bc-bhjvt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.101.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali60c8507e1d2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:20:10.435359 containerd[1616]: 2025-12-12 18:20:10.389 [INFO][4182] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.130/32] ContainerID="210543e581f67d59e33800361c6981ccf2374bea99b58249670a7363274da8e3" Namespace="kube-system" Pod="coredns-668d6bf9bc-bhjvt" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-coredns--668d6bf9bc--bhjvt-eth0" Dec 12 18:20:10.435359 containerd[1616]: 2025-12-12 18:20:10.390 [INFO][4182] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60c8507e1d2 ContainerID="210543e581f67d59e33800361c6981ccf2374bea99b58249670a7363274da8e3" Namespace="kube-system" Pod="coredns-668d6bf9bc-bhjvt" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-coredns--668d6bf9bc--bhjvt-eth0" Dec 12 18:20:10.435359 containerd[1616]: 2025-12-12 18:20:10.400 [INFO][4182] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="210543e581f67d59e33800361c6981ccf2374bea99b58249670a7363274da8e3" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-bhjvt" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-coredns--668d6bf9bc--bhjvt-eth0" Dec 12 18:20:10.435359 containerd[1616]: 2025-12-12 18:20:10.402 [INFO][4182] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="210543e581f67d59e33800361c6981ccf2374bea99b58249670a7363274da8e3" Namespace="kube-system" Pod="coredns-668d6bf9bc-bhjvt" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-coredns--668d6bf9bc--bhjvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--f--8be9c60ab1-k8s-coredns--668d6bf9bc--bhjvt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"516ed3cc-2563-4682-9bb4-937befb1cd30", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 19, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-f-8be9c60ab1", ContainerID:"210543e581f67d59e33800361c6981ccf2374bea99b58249670a7363274da8e3", Pod:"coredns-668d6bf9bc-bhjvt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.101.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali60c8507e1d2", MAC:"de:b3:e2:27:7c:64", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:20:10.435359 containerd[1616]: 2025-12-12 18:20:10.420 [INFO][4182] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="210543e581f67d59e33800361c6981ccf2374bea99b58249670a7363274da8e3" Namespace="kube-system" Pod="coredns-668d6bf9bc-bhjvt" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-coredns--668d6bf9bc--bhjvt-eth0" Dec 12 18:20:10.514242 kubelet[2819]: E1212 18:20:10.514147 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7565c6cc-lrgtt" podUID="f2c6d001-1096-4786-820b-c2f7a945bcac" Dec 12 18:20:10.515201 containerd[1616]: time="2025-12-12T18:20:10.514822477Z" level=info msg="connecting to shim 210543e581f67d59e33800361c6981ccf2374bea99b58249670a7363274da8e3" address="unix:///run/containerd/s/664876dde5f0a1e8477ad420354a1a10e84e1e79b0f5895fa1e36f6f1c79ae84" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:20:10.592320 systemd[1]: Started cri-containerd-210543e581f67d59e33800361c6981ccf2374bea99b58249670a7363274da8e3.scope - libcontainer container 210543e581f67d59e33800361c6981ccf2374bea99b58249670a7363274da8e3. Dec 12 18:20:10.617000 audit: BPF prog-id=203 op=LOAD Dec 12 18:20:10.619000 audit: BPF prog-id=204 op=LOAD Dec 12 18:20:10.619000 audit[4234]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4223 pid=4234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231303534336535383166363764353965333338303033363163363938 Dec 12 18:20:10.619000 audit: BPF prog-id=204 op=UNLOAD Dec 12 18:20:10.619000 audit[4234]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4223 pid=4234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231303534336535383166363764353965333338303033363163363938 Dec 12 18:20:10.619000 audit: BPF prog-id=205 op=LOAD Dec 12 18:20:10.619000 audit[4234]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4223 pid=4234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231303534336535383166363764353965333338303033363163363938 Dec 12 18:20:10.619000 audit: BPF prog-id=206 op=LOAD Dec 12 18:20:10.619000 audit[4234]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4223 pid=4234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231303534336535383166363764353965333338303033363163363938 Dec 12 18:20:10.619000 audit: BPF prog-id=206 op=UNLOAD Dec 12 18:20:10.619000 
audit[4234]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4223 pid=4234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231303534336535383166363764353965333338303033363163363938 Dec 12 18:20:10.619000 audit: BPF prog-id=205 op=UNLOAD Dec 12 18:20:10.619000 audit[4234]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4223 pid=4234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231303534336535383166363764353965333338303033363163363938 Dec 12 18:20:10.619000 audit: BPF prog-id=207 op=LOAD Dec 12 18:20:10.619000 audit[4234]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4223 pid=4234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231303534336535383166363764353965333338303033363163363938 Dec 12 18:20:10.684693 containerd[1616]: time="2025-12-12T18:20:10.684605672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bhjvt,Uid:516ed3cc-2563-4682-9bb4-937befb1cd30,Namespace:kube-system,Attempt:0,} returns sandbox id \"210543e581f67d59e33800361c6981ccf2374bea99b58249670a7363274da8e3\"" Dec 12 18:20:10.686411 kubelet[2819]: E1212 18:20:10.686290 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:20:10.691624 containerd[1616]: time="2025-12-12T18:20:10.691582318Z" level=info msg="CreateContainer within sandbox \"210543e581f67d59e33800361c6981ccf2374bea99b58249670a7363274da8e3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 18:20:10.704000 audit[4262]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=4262 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:20:10.704000 audit[4262]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe88a29ac0 a2=0 a3=7ffe88a29aac items=0 ppid=2939 pid=4262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.704000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:20:10.718386 containerd[1616]: time="2025-12-12T18:20:10.718190023Z" 
level=info msg="Container 508d20377ae09e26f555484f272085522f4214ec276886c2e6ccd1ad3bcbcf0f: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:20:10.725494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4061230898.mount: Deactivated successfully. Dec 12 18:20:10.732000 audit[4262]: NETFILTER_CFG table=nat:122 family=2 entries=14 op=nft_register_rule pid=4262 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:20:10.732000 audit[4262]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe88a29ac0 a2=0 a3=0 items=0 ppid=2939 pid=4262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.732000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:20:10.741638 containerd[1616]: time="2025-12-12T18:20:10.741402216Z" level=info msg="CreateContainer within sandbox \"210543e581f67d59e33800361c6981ccf2374bea99b58249670a7363274da8e3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"508d20377ae09e26f555484f272085522f4214ec276886c2e6ccd1ad3bcbcf0f\"" Dec 12 18:20:10.743628 containerd[1616]: time="2025-12-12T18:20:10.743585643Z" level=info msg="StartContainer for \"508d20377ae09e26f555484f272085522f4214ec276886c2e6ccd1ad3bcbcf0f\"" Dec 12 18:20:10.747529 containerd[1616]: time="2025-12-12T18:20:10.747433948Z" level=info msg="connecting to shim 508d20377ae09e26f555484f272085522f4214ec276886c2e6ccd1ad3bcbcf0f" address="unix:///run/containerd/s/664876dde5f0a1e8477ad420354a1a10e84e1e79b0f5895fa1e36f6f1c79ae84" protocol=ttrpc version=3 Dec 12 18:20:10.786028 systemd[1]: Started cri-containerd-508d20377ae09e26f555484f272085522f4214ec276886c2e6ccd1ad3bcbcf0f.scope - libcontainer container 508d20377ae09e26f555484f272085522f4214ec276886c2e6ccd1ad3bcbcf0f. 
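Note: the ErrImagePull and ImagePullBackOff errors earlier in this capture both trace back to the same containerd fetch result, "fetch failed after status: 404 Not Found" from ghcr.io, for ghcr.io/flatcar/calico/whisker:v3.30.4 and ghcr.io/flatcar/calico/whisker-backend:v3.30.4. A small sketch for pulling the failing image references out of a journal capture like this one, assuming only the literal wording shown in these messages; the regex, file name, and function name are illustrative, not part of any tool in the log:

    import re
    import sys
    from collections import Counter

    # Matches the containerd/kubelet wording used above, e.g.
    #   failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\"
    FAILED_PULL = re.compile(r'failed to pull and unpack image [\\"]+([^"\\]+)[\\"]+')

    def failed_images(lines):
        """Count how often each image reference appears in a pull failure."""
        counts = Counter()
        for line in lines:
            counts.update(FAILED_PULL.findall(line))
        return counts

    if __name__ == "__main__":
        # Usage: python3 failed_pulls.py < journal.txt
        for image, hits in failed_images(sys.stdin).most_common():
            print(f"{hits:4d}  {image}")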
Dec 12 18:20:10.826000 audit: BPF prog-id=208 op=LOAD Dec 12 18:20:10.828000 audit: BPF prog-id=209 op=LOAD Dec 12 18:20:10.828000 audit[4263]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4223 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.828000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530386432303337376165303965323666353535343834663237323038 Dec 12 18:20:10.828000 audit: BPF prog-id=209 op=UNLOAD Dec 12 18:20:10.828000 audit[4263]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4223 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.828000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530386432303337376165303965323666353535343834663237323038 Dec 12 18:20:10.828000 audit: BPF prog-id=210 op=LOAD Dec 12 18:20:10.828000 audit[4263]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4223 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.828000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530386432303337376165303965323666353535343834663237323038 Dec 12 18:20:10.828000 audit: BPF prog-id=211 op=LOAD Dec 12 18:20:10.828000 audit[4263]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4223 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.828000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530386432303337376165303965323666353535343834663237323038 Dec 12 18:20:10.828000 audit: BPF prog-id=211 op=UNLOAD Dec 12 18:20:10.828000 audit[4263]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4223 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.828000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530386432303337376165303965323666353535343834663237323038 Dec 12 18:20:10.828000 audit: BPF prog-id=210 op=UNLOAD Dec 12 18:20:10.828000 audit[4263]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4223 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.828000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530386432303337376165303965323666353535343834663237323038 Dec 12 18:20:10.828000 audit: BPF prog-id=212 op=LOAD Dec 12 18:20:10.828000 audit[4263]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4223 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.828000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530386432303337376165303965323666353535343834663237323038 Dec 12 18:20:10.869150 containerd[1616]: time="2025-12-12T18:20:10.869065559Z" level=info msg="StartContainer for \"508d20377ae09e26f555484f272085522f4214ec276886c2e6ccd1ad3bcbcf0f\" returns successfully" Dec 12 18:20:10.905000 audit: BPF prog-id=213 op=LOAD Dec 12 18:20:10.905000 audit[4194]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff6b9a6880 a2=94 a3=1 items=0 ppid=3949 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.905000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:20:10.905000 audit: BPF prog-id=213 op=UNLOAD Dec 12 18:20:10.905000 audit[4194]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff6b9a6880 a2=94 a3=1 items=0 ppid=3949 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.905000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:20:10.936000 audit: BPF prog-id=214 op=LOAD Dec 12 18:20:10.936000 audit[4194]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff6b9a6870 a2=94 a3=4 items=0 ppid=3949 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.936000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:20:10.936000 audit: BPF prog-id=214 op=UNLOAD Dec 12 18:20:10.936000 audit[4194]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff6b9a6870 a2=0 a3=4 items=0 ppid=3949 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.936000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:20:10.936000 audit: BPF prog-id=215 op=LOAD Dec 12 18:20:10.936000 audit[4194]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff6b9a66d0 a2=94 a3=5 items=0 ppid=3949 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.936000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:20:10.936000 audit: BPF prog-id=215 op=UNLOAD Dec 12 18:20:10.936000 audit[4194]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff6b9a66d0 a2=0 a3=5 items=0 ppid=3949 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.936000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:20:10.936000 audit: BPF prog-id=216 op=LOAD Dec 12 18:20:10.936000 audit[4194]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff6b9a68f0 a2=94 a3=6 items=0 ppid=3949 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.936000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:20:10.936000 audit: BPF prog-id=216 op=UNLOAD Dec 12 18:20:10.936000 audit[4194]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff6b9a68f0 a2=0 a3=6 items=0 ppid=3949 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.936000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:20:10.937000 audit: BPF prog-id=217 op=LOAD Dec 12 18:20:10.937000 audit[4194]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff6b9a60a0 a2=94 a3=88 items=0 ppid=3949 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.937000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:20:10.937000 audit: BPF prog-id=218 op=LOAD Dec 12 18:20:10.937000 audit[4194]: SYSCALL arch=c000003e syscall=321 
success=yes exit=7 a0=5 a1=7fff6b9a5f20 a2=94 a3=2 items=0 ppid=3949 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.937000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:20:10.937000 audit: BPF prog-id=218 op=UNLOAD Dec 12 18:20:10.937000 audit[4194]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff6b9a5f50 a2=0 a3=7fff6b9a6050 items=0 ppid=3949 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.937000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:20:10.938000 audit: BPF prog-id=217 op=UNLOAD Dec 12 18:20:10.938000 audit[4194]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=11b66d10 a2=0 a3=2704815d80c5419d items=0 ppid=3949 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.938000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:20:10.949000 audit: BPF prog-id=199 op=UNLOAD Dec 12 18:20:10.949000 audit[3949]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000846e40 a2=0 a3=0 items=0 ppid=3932 pid=3949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:10.949000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 12 18:20:11.169000 audit[4314]: NETFILTER_CFG table=nat:123 family=2 entries=15 op=nft_register_chain pid=4314 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:20:11.169000 audit[4314]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffd51acd120 a2=0 a3=7ffd51acd10c items=0 ppid=3949 pid=4314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:11.169000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:20:11.213000 audit[4315]: NETFILTER_CFG table=raw:124 family=2 entries=21 op=nft_register_chain pid=4315 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:20:11.213000 audit[4315]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffea0a4d320 a2=0 a3=7ffea0a4d30c items=0 ppid=3949 pid=4315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:11.213000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:20:11.223000 audit[4320]: NETFILTER_CFG table=mangle:125 family=2 entries=16 op=nft_register_chain pid=4320 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:20:11.223000 audit[4320]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffdad8b3f20 a2=0 a3=7ffdad8b3f0c items=0 ppid=3949 pid=4320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:11.223000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:20:11.242000 audit[4322]: NETFILTER_CFG table=filter:126 family=2 entries=94 op=nft_register_chain pid=4322 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:20:11.242000 audit[4322]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffe9908e4c0 a2=0 a3=7ffe9908e4ac items=0 ppid=3949 pid=4322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:11.242000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:20:11.359000 audit[4332]: NETFILTER_CFG table=filter:127 family=2 entries=42 op=nft_register_chain pid=4332 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:20:11.359000 audit[4332]: SYSCALL arch=c000003e syscall=46 success=yes exit=22552 a0=3 a1=7ffcf1477340 a2=0 a3=7ffcf147732c items=0 ppid=3949 pid=4332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:11.359000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:20:11.513636 kubelet[2819]: E1212 18:20:11.513451 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:20:11.541511 kubelet[2819]: I1212 18:20:11.541367 2819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-bhjvt" podStartSLOduration=43.541340397 podStartE2EDuration="43.541340397s" podCreationTimestamp="2025-12-12 18:19:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:20:11.537750407 +0000 UTC m=+49.649718072" watchObservedRunningTime="2025-12-12 18:20:11.541340397 +0000 UTC m=+49.653308056" Dec 12 18:20:11.584712 kernel: kauditd_printk_skb: 278 callbacks suppressed Dec 12 18:20:11.584931 kernel: audit: type=1325 audit(1765563611.578:671): table=filter:128 family=2 entries=20 op=nft_register_rule pid=4335 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:20:11.578000 audit[4335]: NETFILTER_CFG table=filter:128 family=2 entries=20 op=nft_register_rule 
pid=4335 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:20:11.578000 audit[4335]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff4a063ea0 a2=0 a3=7fff4a063e8c items=0 ppid=2939 pid=4335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:11.586888 kernel: audit: type=1300 audit(1765563611.578:671): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff4a063ea0 a2=0 a3=7fff4a063e8c items=0 ppid=2939 pid=4335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:11.592689 kernel: audit: type=1327 audit(1765563611.578:671): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:20:11.578000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:20:11.588000 audit[4335]: NETFILTER_CFG table=nat:129 family=2 entries=14 op=nft_register_rule pid=4335 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:20:11.588000 audit[4335]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff4a063ea0 a2=0 a3=0 items=0 ppid=2939 pid=4335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:11.603051 kernel: audit: type=1325 audit(1765563611.588:672): table=nat:129 family=2 entries=14 op=nft_register_rule pid=4335 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:20:11.603201 kernel: audit: type=1300 audit(1765563611.588:672): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff4a063ea0 a2=0 a3=0 items=0 ppid=2939 pid=4335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:11.588000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:20:11.613508 kernel: audit: type=1327 audit(1765563611.588:672): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:20:11.636000 audit[4337]: NETFILTER_CFG table=filter:130 family=2 entries=17 op=nft_register_rule pid=4337 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:20:11.643473 kernel: audit: type=1325 audit(1765563611.636:673): table=filter:130 family=2 entries=17 op=nft_register_rule pid=4337 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:20:11.647866 kernel: audit: type=1300 audit(1765563611.636:673): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffda4edc510 a2=0 a3=7ffda4edc4fc items=0 ppid=2939 pid=4337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:11.647938 kernel: audit: type=1327 audit(1765563611.636:673): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:20:11.636000 audit[4337]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffda4edc510 a2=0 a3=7ffda4edc4fc items=0 ppid=2939 pid=4337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:11.636000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:20:11.641684 systemd-networkd[1510]: cali60c8507e1d2: Gained IPv6LL Dec 12 18:20:11.647000 audit[4337]: NETFILTER_CFG table=nat:131 family=2 entries=35 op=nft_register_chain pid=4337 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:20:11.655676 kernel: audit: type=1325 audit(1765563611.647:674): table=nat:131 family=2 entries=35 op=nft_register_chain pid=4337 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:20:11.647000 audit[4337]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffda4edc510 a2=0 a3=7ffda4edc4fc items=0 ppid=2939 pid=4337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:11.647000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:20:11.958820 systemd-networkd[1510]: vxlan.calico: Gained IPv6LL Dec 12 18:20:12.128273 containerd[1616]: time="2025-12-12T18:20:12.127858221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f9c8d5fbb-p96pq,Uid:ed1bf1c8-646f-4c33-9642-90a577c1d786,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:20:12.131608 containerd[1616]: time="2025-12-12T18:20:12.131158306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9bb959468-v58pb,Uid:f4c646c7-47f1-433d-b7c4-005cccecda6a,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:20:12.131608 containerd[1616]: time="2025-12-12T18:20:12.131416511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9bb959468-57r44,Uid:ab0029a5-8491-42f1-b060-fef0c0422b49,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:20:12.131608 containerd[1616]: time="2025-12-12T18:20:12.131509721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fcbf96c45-vldxn,Uid:88464bd3-9403-4901-97b2-3cffb941f328,Namespace:calico-system,Attempt:0,}" Dec 12 18:20:12.521753 kubelet[2819]: E1212 18:20:12.521705 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:20:12.551429 systemd-networkd[1510]: cali9116ed1bafd: Link UP Dec 12 18:20:12.564473 systemd-networkd[1510]: cali9116ed1bafd: Gained carrier Dec 12 18:20:12.619874 containerd[1616]: 2025-12-12 18:20:12.260 [INFO][4338] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--6f9c8d5fbb--p96pq-eth0 calico-apiserver-6f9c8d5fbb- calico-apiserver ed1bf1c8-646f-4c33-9642-90a577c1d786 916 0 2025-12-12 18:19:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver 
pod-template-hash:6f9c8d5fbb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515.1.0-f-8be9c60ab1 calico-apiserver-6f9c8d5fbb-p96pq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9116ed1bafd [] [] }} ContainerID="1f55a01a3f92d6320d826cf732eb397e7decb54be209fc2c5f3af55a9dedc265" Namespace="calico-apiserver" Pod="calico-apiserver-6f9c8d5fbb-p96pq" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--6f9c8d5fbb--p96pq-" Dec 12 18:20:12.619874 containerd[1616]: 2025-12-12 18:20:12.260 [INFO][4338] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1f55a01a3f92d6320d826cf732eb397e7decb54be209fc2c5f3af55a9dedc265" Namespace="calico-apiserver" Pod="calico-apiserver-6f9c8d5fbb-p96pq" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--6f9c8d5fbb--p96pq-eth0" Dec 12 18:20:12.619874 containerd[1616]: 2025-12-12 18:20:12.418 [INFO][4383] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1f55a01a3f92d6320d826cf732eb397e7decb54be209fc2c5f3af55a9dedc265" HandleID="k8s-pod-network.1f55a01a3f92d6320d826cf732eb397e7decb54be209fc2c5f3af55a9dedc265" Workload="ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--6f9c8d5fbb--p96pq-eth0" Dec 12 18:20:12.619874 containerd[1616]: 2025-12-12 18:20:12.418 [INFO][4383] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1f55a01a3f92d6320d826cf732eb397e7decb54be209fc2c5f3af55a9dedc265" HandleID="k8s-pod-network.1f55a01a3f92d6320d826cf732eb397e7decb54be209fc2c5f3af55a9dedc265" Workload="ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--6f9c8d5fbb--p96pq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005dca90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515.1.0-f-8be9c60ab1", "pod":"calico-apiserver-6f9c8d5fbb-p96pq", "timestamp":"2025-12-12 18:20:12.41853858 +0000 UTC"}, Hostname:"ci-4515.1.0-f-8be9c60ab1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:20:12.619874 containerd[1616]: 2025-12-12 18:20:12.418 [INFO][4383] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:20:12.619874 containerd[1616]: 2025-12-12 18:20:12.418 [INFO][4383] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:20:12.619874 containerd[1616]: 2025-12-12 18:20:12.418 [INFO][4383] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-f-8be9c60ab1' Dec 12 18:20:12.619874 containerd[1616]: 2025-12-12 18:20:12.433 [INFO][4383] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1f55a01a3f92d6320d826cf732eb397e7decb54be209fc2c5f3af55a9dedc265" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:12.619874 containerd[1616]: 2025-12-12 18:20:12.449 [INFO][4383] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:12.619874 containerd[1616]: 2025-12-12 18:20:12.462 [INFO][4383] ipam/ipam.go 511: Trying affinity for 192.168.101.128/26 host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:12.619874 containerd[1616]: 2025-12-12 18:20:12.469 [INFO][4383] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.128/26 host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:12.619874 containerd[1616]: 2025-12-12 18:20:12.482 [INFO][4383] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.128/26 host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:12.619874 containerd[1616]: 2025-12-12 18:20:12.483 [INFO][4383] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.1f55a01a3f92d6320d826cf732eb397e7decb54be209fc2c5f3af55a9dedc265" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:12.619874 containerd[1616]: 2025-12-12 18:20:12.489 [INFO][4383] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1f55a01a3f92d6320d826cf732eb397e7decb54be209fc2c5f3af55a9dedc265 Dec 12 18:20:12.619874 containerd[1616]: 2025-12-12 18:20:12.497 [INFO][4383] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.1f55a01a3f92d6320d826cf732eb397e7decb54be209fc2c5f3af55a9dedc265" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:12.619874 containerd[1616]: 2025-12-12 18:20:12.511 [INFO][4383] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.101.131/26] block=192.168.101.128/26 handle="k8s-pod-network.1f55a01a3f92d6320d826cf732eb397e7decb54be209fc2c5f3af55a9dedc265" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:12.619874 containerd[1616]: 2025-12-12 18:20:12.511 [INFO][4383] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.131/26] handle="k8s-pod-network.1f55a01a3f92d6320d826cf732eb397e7decb54be209fc2c5f3af55a9dedc265" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:12.619874 containerd[1616]: 2025-12-12 18:20:12.512 [INFO][4383] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
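In the IPAM exchange above, Calico confirms this node's affinity for the block 192.168.101.128/26 and claims 192.168.101.131 from it for calico-apiserver-6f9c8d5fbb-p96pq; the two assignments that follow claim .132 and .133 from the same block. A quick cross-check with Python's ipaddress module (an illustration, not Calico's own code) shows what that /26 block gives the host:

```python
# Cross-check (not Calico code): the host-affine IPAM block logged above is
# 192.168.101.128/26; the addresses claimed for the three calico-apiserver
# pods in this section are .131, .132 and .133.
import ipaddress

block = ipaddress.ip_network("192.168.101.128/26")
claimed = [ipaddress.ip_address(a) for a in
           ("192.168.101.131", "192.168.101.132", "192.168.101.133")]

print(block.num_addresses)                 # 64 addresses per /26 block
print(all(ip in block for ip in claimed))  # True: every claim falls inside the block
print(block[0], block[-1])                 # 192.168.101.128 192.168.101.191
```

The /26 shown in the "Successfully claimed IPs" line annotates the block the address came from; the pod itself is given the /32 that later appears in the WorkloadEndpoint's IPNetworks field.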
Dec 12 18:20:12.619874 containerd[1616]: 2025-12-12 18:20:12.512 [INFO][4383] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.101.131/26] IPv6=[] ContainerID="1f55a01a3f92d6320d826cf732eb397e7decb54be209fc2c5f3af55a9dedc265" HandleID="k8s-pod-network.1f55a01a3f92d6320d826cf732eb397e7decb54be209fc2c5f3af55a9dedc265" Workload="ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--6f9c8d5fbb--p96pq-eth0" Dec 12 18:20:12.622067 containerd[1616]: 2025-12-12 18:20:12.523 [INFO][4338] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1f55a01a3f92d6320d826cf732eb397e7decb54be209fc2c5f3af55a9dedc265" Namespace="calico-apiserver" Pod="calico-apiserver-6f9c8d5fbb-p96pq" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--6f9c8d5fbb--p96pq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--6f9c8d5fbb--p96pq-eth0", GenerateName:"calico-apiserver-6f9c8d5fbb-", Namespace:"calico-apiserver", SelfLink:"", UID:"ed1bf1c8-646f-4c33-9642-90a577c1d786", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 19, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f9c8d5fbb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-f-8be9c60ab1", ContainerID:"", Pod:"calico-apiserver-6f9c8d5fbb-p96pq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.101.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9116ed1bafd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:20:12.622067 containerd[1616]: 2025-12-12 18:20:12.524 [INFO][4338] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.131/32] ContainerID="1f55a01a3f92d6320d826cf732eb397e7decb54be209fc2c5f3af55a9dedc265" Namespace="calico-apiserver" Pod="calico-apiserver-6f9c8d5fbb-p96pq" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--6f9c8d5fbb--p96pq-eth0" Dec 12 18:20:12.622067 containerd[1616]: 2025-12-12 18:20:12.525 [INFO][4338] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9116ed1bafd ContainerID="1f55a01a3f92d6320d826cf732eb397e7decb54be209fc2c5f3af55a9dedc265" Namespace="calico-apiserver" Pod="calico-apiserver-6f9c8d5fbb-p96pq" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--6f9c8d5fbb--p96pq-eth0" Dec 12 18:20:12.622067 containerd[1616]: 2025-12-12 18:20:12.571 [INFO][4338] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1f55a01a3f92d6320d826cf732eb397e7decb54be209fc2c5f3af55a9dedc265" Namespace="calico-apiserver" Pod="calico-apiserver-6f9c8d5fbb-p96pq" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--6f9c8d5fbb--p96pq-eth0" Dec 12 18:20:12.622067 containerd[1616]: 2025-12-12 18:20:12.573 
[INFO][4338] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1f55a01a3f92d6320d826cf732eb397e7decb54be209fc2c5f3af55a9dedc265" Namespace="calico-apiserver" Pod="calico-apiserver-6f9c8d5fbb-p96pq" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--6f9c8d5fbb--p96pq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--6f9c8d5fbb--p96pq-eth0", GenerateName:"calico-apiserver-6f9c8d5fbb-", Namespace:"calico-apiserver", SelfLink:"", UID:"ed1bf1c8-646f-4c33-9642-90a577c1d786", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 19, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f9c8d5fbb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-f-8be9c60ab1", ContainerID:"1f55a01a3f92d6320d826cf732eb397e7decb54be209fc2c5f3af55a9dedc265", Pod:"calico-apiserver-6f9c8d5fbb-p96pq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.101.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9116ed1bafd", MAC:"62:04:bf:9b:dc:79", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:20:12.622067 containerd[1616]: 2025-12-12 18:20:12.603 [INFO][4338] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1f55a01a3f92d6320d826cf732eb397e7decb54be209fc2c5f3af55a9dedc265" Namespace="calico-apiserver" Pod="calico-apiserver-6f9c8d5fbb-p96pq" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--6f9c8d5fbb--p96pq-eth0" Dec 12 18:20:12.704273 systemd-networkd[1510]: cali1c24635cc5e: Link UP Dec 12 18:20:12.715474 systemd-networkd[1510]: cali1c24635cc5e: Gained carrier Dec 12 18:20:12.738191 containerd[1616]: time="2025-12-12T18:20:12.737762730Z" level=info msg="connecting to shim 1f55a01a3f92d6320d826cf732eb397e7decb54be209fc2c5f3af55a9dedc265" address="unix:///run/containerd/s/6b407b086a051106ef4c549ec10dcfa2fb5211a5d5d501bd72861417285a81fc" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:20:12.793831 containerd[1616]: 2025-12-12 18:20:12.327 [INFO][4347] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--9bb959468--v58pb-eth0 calico-apiserver-9bb959468- calico-apiserver f4c646c7-47f1-433d-b7c4-005cccecda6a 912 0 2025-12-12 18:19:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9bb959468 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515.1.0-f-8be9c60ab1 calico-apiserver-9bb959468-v58pb eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] cali1c24635cc5e [] [] }} ContainerID="6b1f0b660a5832a9ccd9012e30d784f4c5ae32e981850f5bc6fa565747d8e476" Namespace="calico-apiserver" Pod="calico-apiserver-9bb959468-v58pb" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--9bb959468--v58pb-" Dec 12 18:20:12.793831 containerd[1616]: 2025-12-12 18:20:12.332 [INFO][4347] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6b1f0b660a5832a9ccd9012e30d784f4c5ae32e981850f5bc6fa565747d8e476" Namespace="calico-apiserver" Pod="calico-apiserver-9bb959468-v58pb" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--9bb959468--v58pb-eth0" Dec 12 18:20:12.793831 containerd[1616]: 2025-12-12 18:20:12.422 [INFO][4396] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6b1f0b660a5832a9ccd9012e30d784f4c5ae32e981850f5bc6fa565747d8e476" HandleID="k8s-pod-network.6b1f0b660a5832a9ccd9012e30d784f4c5ae32e981850f5bc6fa565747d8e476" Workload="ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--9bb959468--v58pb-eth0" Dec 12 18:20:12.793831 containerd[1616]: 2025-12-12 18:20:12.422 [INFO][4396] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6b1f0b660a5832a9ccd9012e30d784f4c5ae32e981850f5bc6fa565747d8e476" HandleID="k8s-pod-network.6b1f0b660a5832a9ccd9012e30d784f4c5ae32e981850f5bc6fa565747d8e476" Workload="ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--9bb959468--v58pb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039cce0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515.1.0-f-8be9c60ab1", "pod":"calico-apiserver-9bb959468-v58pb", "timestamp":"2025-12-12 18:20:12.422598395 +0000 UTC"}, Hostname:"ci-4515.1.0-f-8be9c60ab1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:20:12.793831 containerd[1616]: 2025-12-12 18:20:12.423 [INFO][4396] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:20:12.793831 containerd[1616]: 2025-12-12 18:20:12.512 [INFO][4396] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:20:12.793831 containerd[1616]: 2025-12-12 18:20:12.512 [INFO][4396] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-f-8be9c60ab1' Dec 12 18:20:12.793831 containerd[1616]: 2025-12-12 18:20:12.550 [INFO][4396] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6b1f0b660a5832a9ccd9012e30d784f4c5ae32e981850f5bc6fa565747d8e476" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:12.793831 containerd[1616]: 2025-12-12 18:20:12.586 [INFO][4396] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:12.793831 containerd[1616]: 2025-12-12 18:20:12.605 [INFO][4396] ipam/ipam.go 511: Trying affinity for 192.168.101.128/26 host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:12.793831 containerd[1616]: 2025-12-12 18:20:12.618 [INFO][4396] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.128/26 host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:12.793831 containerd[1616]: 2025-12-12 18:20:12.624 [INFO][4396] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.128/26 host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:12.793831 containerd[1616]: 2025-12-12 18:20:12.624 [INFO][4396] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.6b1f0b660a5832a9ccd9012e30d784f4c5ae32e981850f5bc6fa565747d8e476" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:12.793831 containerd[1616]: 2025-12-12 18:20:12.629 [INFO][4396] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6b1f0b660a5832a9ccd9012e30d784f4c5ae32e981850f5bc6fa565747d8e476 Dec 12 18:20:12.793831 containerd[1616]: 2025-12-12 18:20:12.638 [INFO][4396] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.6b1f0b660a5832a9ccd9012e30d784f4c5ae32e981850f5bc6fa565747d8e476" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:12.793831 containerd[1616]: 2025-12-12 18:20:12.659 [INFO][4396] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.101.132/26] block=192.168.101.128/26 handle="k8s-pod-network.6b1f0b660a5832a9ccd9012e30d784f4c5ae32e981850f5bc6fa565747d8e476" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:12.793831 containerd[1616]: 2025-12-12 18:20:12.660 [INFO][4396] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.132/26] handle="k8s-pod-network.6b1f0b660a5832a9ccd9012e30d784f4c5ae32e981850f5bc6fa565747d8e476" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:12.793831 containerd[1616]: 2025-12-12 18:20:12.660 [INFO][4396] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:20:12.793831 containerd[1616]: 2025-12-12 18:20:12.661 [INFO][4396] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.101.132/26] IPv6=[] ContainerID="6b1f0b660a5832a9ccd9012e30d784f4c5ae32e981850f5bc6fa565747d8e476" HandleID="k8s-pod-network.6b1f0b660a5832a9ccd9012e30d784f4c5ae32e981850f5bc6fa565747d8e476" Workload="ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--9bb959468--v58pb-eth0" Dec 12 18:20:12.799810 containerd[1616]: 2025-12-12 18:20:12.669 [INFO][4347] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6b1f0b660a5832a9ccd9012e30d784f4c5ae32e981850f5bc6fa565747d8e476" Namespace="calico-apiserver" Pod="calico-apiserver-9bb959468-v58pb" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--9bb959468--v58pb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--9bb959468--v58pb-eth0", GenerateName:"calico-apiserver-9bb959468-", Namespace:"calico-apiserver", SelfLink:"", UID:"f4c646c7-47f1-433d-b7c4-005cccecda6a", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 19, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9bb959468", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-f-8be9c60ab1", ContainerID:"", Pod:"calico-apiserver-9bb959468-v58pb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.101.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1c24635cc5e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:20:12.799810 containerd[1616]: 2025-12-12 18:20:12.671 [INFO][4347] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.132/32] ContainerID="6b1f0b660a5832a9ccd9012e30d784f4c5ae32e981850f5bc6fa565747d8e476" Namespace="calico-apiserver" Pod="calico-apiserver-9bb959468-v58pb" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--9bb959468--v58pb-eth0" Dec 12 18:20:12.799810 containerd[1616]: 2025-12-12 18:20:12.671 [INFO][4347] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1c24635cc5e ContainerID="6b1f0b660a5832a9ccd9012e30d784f4c5ae32e981850f5bc6fa565747d8e476" Namespace="calico-apiserver" Pod="calico-apiserver-9bb959468-v58pb" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--9bb959468--v58pb-eth0" Dec 12 18:20:12.799810 containerd[1616]: 2025-12-12 18:20:12.720 [INFO][4347] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6b1f0b660a5832a9ccd9012e30d784f4c5ae32e981850f5bc6fa565747d8e476" Namespace="calico-apiserver" Pod="calico-apiserver-9bb959468-v58pb" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--9bb959468--v58pb-eth0" Dec 12 18:20:12.799810 containerd[1616]: 2025-12-12 18:20:12.722 [INFO][4347] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6b1f0b660a5832a9ccd9012e30d784f4c5ae32e981850f5bc6fa565747d8e476" Namespace="calico-apiserver" Pod="calico-apiserver-9bb959468-v58pb" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--9bb959468--v58pb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--9bb959468--v58pb-eth0", GenerateName:"calico-apiserver-9bb959468-", Namespace:"calico-apiserver", SelfLink:"", UID:"f4c646c7-47f1-433d-b7c4-005cccecda6a", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 19, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9bb959468", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-f-8be9c60ab1", ContainerID:"6b1f0b660a5832a9ccd9012e30d784f4c5ae32e981850f5bc6fa565747d8e476", Pod:"calico-apiserver-9bb959468-v58pb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.101.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1c24635cc5e", MAC:"ba:58:e4:ce:61:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:20:12.799810 containerd[1616]: 2025-12-12 18:20:12.768 [INFO][4347] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6b1f0b660a5832a9ccd9012e30d784f4c5ae32e981850f5bc6fa565747d8e476" Namespace="calico-apiserver" Pod="calico-apiserver-9bb959468-v58pb" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--9bb959468--v58pb-eth0" Dec 12 18:20:12.848951 systemd[1]: Started cri-containerd-1f55a01a3f92d6320d826cf732eb397e7decb54be209fc2c5f3af55a9dedc265.scope - libcontainer container 1f55a01a3f92d6320d826cf732eb397e7decb54be209fc2c5f3af55a9dedc265. 
Dec 12 18:20:12.867254 systemd-networkd[1510]: cali9107fa1cf7a: Link UP Dec 12 18:20:12.873659 systemd-networkd[1510]: cali9107fa1cf7a: Gained carrier Dec 12 18:20:12.925305 containerd[1616]: time="2025-12-12T18:20:12.923910083Z" level=info msg="connecting to shim 6b1f0b660a5832a9ccd9012e30d784f4c5ae32e981850f5bc6fa565747d8e476" address="unix:///run/containerd/s/bb9d8e3eca6951b419e8debffc97fbbb101d4a09a51a06f2289327064eaa515d" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:20:12.945505 containerd[1616]: 2025-12-12 18:20:12.338 [INFO][4359] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--9bb959468--57r44-eth0 calico-apiserver-9bb959468- calico-apiserver ab0029a5-8491-42f1-b060-fef0c0422b49 910 0 2025-12-12 18:19:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9bb959468 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515.1.0-f-8be9c60ab1 calico-apiserver-9bb959468-57r44 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9107fa1cf7a [] [] }} ContainerID="7e79d225b6181addafdfe74322517b392eca6a4a555f6b0d5a4926f20489855b" Namespace="calico-apiserver" Pod="calico-apiserver-9bb959468-57r44" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--9bb959468--57r44-" Dec 12 18:20:12.945505 containerd[1616]: 2025-12-12 18:20:12.338 [INFO][4359] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7e79d225b6181addafdfe74322517b392eca6a4a555f6b0d5a4926f20489855b" Namespace="calico-apiserver" Pod="calico-apiserver-9bb959468-57r44" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--9bb959468--57r44-eth0" Dec 12 18:20:12.945505 containerd[1616]: 2025-12-12 18:20:12.461 [INFO][4397] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7e79d225b6181addafdfe74322517b392eca6a4a555f6b0d5a4926f20489855b" HandleID="k8s-pod-network.7e79d225b6181addafdfe74322517b392eca6a4a555f6b0d5a4926f20489855b" Workload="ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--9bb959468--57r44-eth0" Dec 12 18:20:12.945505 containerd[1616]: 2025-12-12 18:20:12.462 [INFO][4397] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7e79d225b6181addafdfe74322517b392eca6a4a555f6b0d5a4926f20489855b" HandleID="k8s-pod-network.7e79d225b6181addafdfe74322517b392eca6a4a555f6b0d5a4926f20489855b" Workload="ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--9bb959468--57r44-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d59d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515.1.0-f-8be9c60ab1", "pod":"calico-apiserver-9bb959468-57r44", "timestamp":"2025-12-12 18:20:12.461751923 +0000 UTC"}, Hostname:"ci-4515.1.0-f-8be9c60ab1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:20:12.945505 containerd[1616]: 2025-12-12 18:20:12.462 [INFO][4397] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:20:12.945505 containerd[1616]: 2025-12-12 18:20:12.661 [INFO][4397] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:20:12.945505 containerd[1616]: 2025-12-12 18:20:12.662 [INFO][4397] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-f-8be9c60ab1' Dec 12 18:20:12.945505 containerd[1616]: 2025-12-12 18:20:12.702 [INFO][4397] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7e79d225b6181addafdfe74322517b392eca6a4a555f6b0d5a4926f20489855b" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:12.945505 containerd[1616]: 2025-12-12 18:20:12.735 [INFO][4397] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:12.945505 containerd[1616]: 2025-12-12 18:20:12.746 [INFO][4397] ipam/ipam.go 511: Trying affinity for 192.168.101.128/26 host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:12.945505 containerd[1616]: 2025-12-12 18:20:12.761 [INFO][4397] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.128/26 host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:12.945505 containerd[1616]: 2025-12-12 18:20:12.770 [INFO][4397] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.128/26 host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:12.945505 containerd[1616]: 2025-12-12 18:20:12.770 [INFO][4397] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.7e79d225b6181addafdfe74322517b392eca6a4a555f6b0d5a4926f20489855b" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:12.945505 containerd[1616]: 2025-12-12 18:20:12.774 [INFO][4397] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7e79d225b6181addafdfe74322517b392eca6a4a555f6b0d5a4926f20489855b Dec 12 18:20:12.945505 containerd[1616]: 2025-12-12 18:20:12.804 [INFO][4397] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.7e79d225b6181addafdfe74322517b392eca6a4a555f6b0d5a4926f20489855b" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:12.945505 containerd[1616]: 2025-12-12 18:20:12.830 [INFO][4397] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.101.133/26] block=192.168.101.128/26 handle="k8s-pod-network.7e79d225b6181addafdfe74322517b392eca6a4a555f6b0d5a4926f20489855b" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:12.945505 containerd[1616]: 2025-12-12 18:20:12.831 [INFO][4397] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.133/26] handle="k8s-pod-network.7e79d225b6181addafdfe74322517b392eca6a4a555f6b0d5a4926f20489855b" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:12.945505 containerd[1616]: 2025-12-12 18:20:12.831 [INFO][4397] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:20:12.945505 containerd[1616]: 2025-12-12 18:20:12.831 [INFO][4397] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.101.133/26] IPv6=[] ContainerID="7e79d225b6181addafdfe74322517b392eca6a4a555f6b0d5a4926f20489855b" HandleID="k8s-pod-network.7e79d225b6181addafdfe74322517b392eca6a4a555f6b0d5a4926f20489855b" Workload="ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--9bb959468--57r44-eth0" Dec 12 18:20:12.948279 containerd[1616]: 2025-12-12 18:20:12.847 [INFO][4359] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7e79d225b6181addafdfe74322517b392eca6a4a555f6b0d5a4926f20489855b" Namespace="calico-apiserver" Pod="calico-apiserver-9bb959468-57r44" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--9bb959468--57r44-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--9bb959468--57r44-eth0", GenerateName:"calico-apiserver-9bb959468-", Namespace:"calico-apiserver", SelfLink:"", UID:"ab0029a5-8491-42f1-b060-fef0c0422b49", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 19, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9bb959468", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-f-8be9c60ab1", ContainerID:"", Pod:"calico-apiserver-9bb959468-57r44", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.101.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9107fa1cf7a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:20:12.948279 containerd[1616]: 2025-12-12 18:20:12.848 [INFO][4359] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.133/32] ContainerID="7e79d225b6181addafdfe74322517b392eca6a4a555f6b0d5a4926f20489855b" Namespace="calico-apiserver" Pod="calico-apiserver-9bb959468-57r44" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--9bb959468--57r44-eth0" Dec 12 18:20:12.948279 containerd[1616]: 2025-12-12 18:20:12.848 [INFO][4359] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9107fa1cf7a ContainerID="7e79d225b6181addafdfe74322517b392eca6a4a555f6b0d5a4926f20489855b" Namespace="calico-apiserver" Pod="calico-apiserver-9bb959468-57r44" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--9bb959468--57r44-eth0" Dec 12 18:20:12.948279 containerd[1616]: 2025-12-12 18:20:12.872 [INFO][4359] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7e79d225b6181addafdfe74322517b392eca6a4a555f6b0d5a4926f20489855b" Namespace="calico-apiserver" Pod="calico-apiserver-9bb959468-57r44" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--9bb959468--57r44-eth0" Dec 12 18:20:12.948279 containerd[1616]: 2025-12-12 18:20:12.873 [INFO][4359] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7e79d225b6181addafdfe74322517b392eca6a4a555f6b0d5a4926f20489855b" Namespace="calico-apiserver" Pod="calico-apiserver-9bb959468-57r44" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--9bb959468--57r44-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--9bb959468--57r44-eth0", GenerateName:"calico-apiserver-9bb959468-", Namespace:"calico-apiserver", SelfLink:"", UID:"ab0029a5-8491-42f1-b060-fef0c0422b49", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 19, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9bb959468", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-f-8be9c60ab1", ContainerID:"7e79d225b6181addafdfe74322517b392eca6a4a555f6b0d5a4926f20489855b", Pod:"calico-apiserver-9bb959468-57r44", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.101.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9107fa1cf7a", MAC:"1a:3b:80:de:ea:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:20:12.948279 containerd[1616]: 2025-12-12 18:20:12.927 [INFO][4359] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7e79d225b6181addafdfe74322517b392eca6a4a555f6b0d5a4926f20489855b" Namespace="calico-apiserver" Pod="calico-apiserver-9bb959468-57r44" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-calico--apiserver--9bb959468--57r44-eth0" Dec 12 18:20:13.059915 systemd[1]: Started cri-containerd-6b1f0b660a5832a9ccd9012e30d784f4c5ae32e981850f5bc6fa565747d8e476.scope - libcontainer container 6b1f0b660a5832a9ccd9012e30d784f4c5ae32e981850f5bc6fa565747d8e476. 
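The dataplane_linux.go 69 lines name the host side of each pod's veth pair with a fixed "cali" prefix followed by characters derived from the workload endpoint's identity (cali9107fa1cf7a above; cali24dec623eff and calia6960c5cee4 appear further down). The sketch below only guesses at the general shape of such a scheme, prefix plus a truncated hash kept inside the kernel's 15-character interface-name limit; it is not Calico's exact derivation.

package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// hostVethName is a hypothetical illustration: hash the workload endpoint
// identifier, hex-encode it, and keep enough characters to stay within the
// kernel's 15-character interface-name limit. NOT Calico's actual algorithm.
func hostVethName(prefix, workloadEndpointID string) string {
	sum := sha1.Sum([]byte(workloadEndpointID))
	return prefix + hex.EncodeToString(sum[:])[:15-len(prefix)]
}

func main() {
	name := hostVethName("cali", "calico-apiserver/calico-apiserver-9bb959468-57r44/eth0")
	fmt.Println(name, "length:", len(name)) // a 15-character "cali..." name
}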
Dec 12 18:20:13.077417 containerd[1616]: time="2025-12-12T18:20:13.077215427Z" level=info msg="connecting to shim 7e79d225b6181addafdfe74322517b392eca6a4a555f6b0d5a4926f20489855b" address="unix:///run/containerd/s/25182b9fc6daf1fb8f3223d8999ccf7d0bb696dce8e90fc2b9bc7850dbb96173" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:20:13.124000 audit: BPF prog-id=219 op=LOAD Dec 12 18:20:13.127182 kubelet[2819]: E1212 18:20:13.127143 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:20:13.128934 containerd[1616]: time="2025-12-12T18:20:13.128886285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q4kjx,Uid:f4950266-b324-4bd8-9271-ead6b00ca6f0,Namespace:kube-system,Attempt:0,}" Dec 12 18:20:13.129748 containerd[1616]: time="2025-12-12T18:20:13.129468622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-h7lvc,Uid:c57efa3a-e82c-436b-9c07-8cf6921dcd5d,Namespace:calico-system,Attempt:0,}" Dec 12 18:20:13.130000 audit: BPF prog-id=220 op=LOAD Dec 12 18:20:13.130000 audit[4450]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4437 pid=4450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:13.130000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166353561303161336639326436333230643832366366373332656233 Dec 12 18:20:13.134000 audit: BPF prog-id=220 op=UNLOAD Dec 12 18:20:13.134000 audit[4450]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4437 pid=4450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:13.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166353561303161336639326436333230643832366366373332656233 Dec 12 18:20:13.134000 audit: BPF prog-id=221 op=LOAD Dec 12 18:20:13.134000 audit[4450]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4437 pid=4450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:13.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166353561303161336639326436333230643832366366373332656233 Dec 12 18:20:13.135000 audit: BPF prog-id=222 op=LOAD Dec 12 18:20:13.135000 audit[4450]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4437 pid=4450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:13.135000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166353561303161336639326436333230643832366366373332656233 Dec 12 18:20:13.136000 audit: BPF prog-id=222 op=UNLOAD Dec 12 18:20:13.136000 audit[4450]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4437 pid=4450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:13.136000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166353561303161336639326436333230643832366366373332656233 Dec 12 18:20:13.136000 audit: BPF prog-id=221 op=UNLOAD Dec 12 18:20:13.136000 audit[4450]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4437 pid=4450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:13.136000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166353561303161336639326436333230643832366366373332656233 Dec 12 18:20:13.136000 audit: BPF prog-id=223 op=LOAD Dec 12 18:20:13.136000 audit[4450]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4437 pid=4450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:13.136000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166353561303161336639326436333230643832366366373332656233 Dec 12 18:20:13.238833 systemd[1]: Started cri-containerd-7e79d225b6181addafdfe74322517b392eca6a4a555f6b0d5a4926f20489855b.scope - libcontainer container 7e79d225b6181addafdfe74322517b392eca6a4a555f6b0d5a4926f20489855b. 
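The audit PROCTITLE records in this stretch encode the audited process's command line as hex with NUL-separated arguments. Decoding the value recorded for pid 4450 (the runc process behind the BPF prog-id loads above) recovers the invocation; a short decoder using only the Go standard library:

package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// proctitle value copied verbatim from the audit record for pid 4450.
	const proctitle = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166353561303161336639326436333230643832366366373332656233"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	// Arguments are NUL-separated inside the audit field.
	fmt.Println(strings.Join(strings.Split(string(raw), "\x00"), " "))
	// runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/1f55a01a3f92d6320d826cf732eb3
}

The final argument is the task directory under /run/containerd/io.containerd.runtime.v2.task/k8s.io/, cut off at the audit proctitle field's 128-byte limit; its visible prefix 1f55a01a3f92... matches the sandbox id that containerd later reports as returned for calico-apiserver-6f9c8d5fbb-p96pq.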
Dec 12 18:20:13.293801 systemd-networkd[1510]: cali24dec623eff: Link UP Dec 12 18:20:13.298875 systemd-networkd[1510]: cali24dec623eff: Gained carrier Dec 12 18:20:13.357556 containerd[1616]: 2025-12-12 18:20:12.393 [INFO][4370] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--f--8be9c60ab1-k8s-calico--kube--controllers--7fcbf96c45--vldxn-eth0 calico-kube-controllers-7fcbf96c45- calico-system 88464bd3-9403-4901-97b2-3cffb941f328 900 0 2025-12-12 18:19:47 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7fcbf96c45 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4515.1.0-f-8be9c60ab1 calico-kube-controllers-7fcbf96c45-vldxn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali24dec623eff [] [] }} ContainerID="1aa28367aae464bf733925c7b7408774576288c519454a427a685a5a5675c977" Namespace="calico-system" Pod="calico-kube-controllers-7fcbf96c45-vldxn" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-calico--kube--controllers--7fcbf96c45--vldxn-" Dec 12 18:20:13.357556 containerd[1616]: 2025-12-12 18:20:12.396 [INFO][4370] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1aa28367aae464bf733925c7b7408774576288c519454a427a685a5a5675c977" Namespace="calico-system" Pod="calico-kube-controllers-7fcbf96c45-vldxn" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-calico--kube--controllers--7fcbf96c45--vldxn-eth0" Dec 12 18:20:13.357556 containerd[1616]: 2025-12-12 18:20:12.477 [INFO][4408] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1aa28367aae464bf733925c7b7408774576288c519454a427a685a5a5675c977" HandleID="k8s-pod-network.1aa28367aae464bf733925c7b7408774576288c519454a427a685a5a5675c977" Workload="ci--4515.1.0--f--8be9c60ab1-k8s-calico--kube--controllers--7fcbf96c45--vldxn-eth0" Dec 12 18:20:13.357556 containerd[1616]: 2025-12-12 18:20:12.478 [INFO][4408] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1aa28367aae464bf733925c7b7408774576288c519454a427a685a5a5675c977" HandleID="k8s-pod-network.1aa28367aae464bf733925c7b7408774576288c519454a427a685a5a5675c977" Workload="ci--4515.1.0--f--8be9c60ab1-k8s-calico--kube--controllers--7fcbf96c45--vldxn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024e980), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-f-8be9c60ab1", "pod":"calico-kube-controllers-7fcbf96c45-vldxn", "timestamp":"2025-12-12 18:20:12.477826382 +0000 UTC"}, Hostname:"ci-4515.1.0-f-8be9c60ab1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:20:13.357556 containerd[1616]: 2025-12-12 18:20:12.478 [INFO][4408] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:20:13.357556 containerd[1616]: 2025-12-12 18:20:12.832 [INFO][4408] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
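Note the lock ordering visible here: request [4408] logs "About to acquire host-wide IPAM lock." at 18:20:12.478 but only "Acquired host-wide IPAM lock." at 18:20:12.832, immediately after [4397] logged "Released host-wide IPAM lock." at 18:20:12.831. Address assignment on a node is serialized, so the concurrent CNI ADDs for the apiserver, kube-controllers, coredns and goldmane pods take turns on the same block. A toy model of that serialization (names invented for this note):

package main

import (
	"fmt"
	"sync"
	"time"
)

// Two CNI ADDs race for one per-host lock; the second blocks until the
// first releases, mirroring the ~350 ms gap between "About to acquire"
// and "Acquired" for request [4408] above.
func main() {
	var hostLock sync.Mutex
	var wg sync.WaitGroup

	assign := func(req string, work time.Duration) {
		defer wg.Done()
		fmt.Println(req, "about to acquire host-wide IPAM lock")
		hostLock.Lock()
		fmt.Println(req, "acquired host-wide IPAM lock")
		time.Sleep(work) // reading and writing the allocation block
		hostLock.Unlock()
		fmt.Println(req, "released host-wide IPAM lock")
	}

	wg.Add(2)
	go assign("[4397]", 350*time.Millisecond)
	time.Sleep(10 * time.Millisecond) // let the first request win the lock
	go assign("[4408]", 200*time.Millisecond)
	wg.Wait()
}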
Dec 12 18:20:13.357556 containerd[1616]: 2025-12-12 18:20:12.832 [INFO][4408] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-f-8be9c60ab1' Dec 12 18:20:13.357556 containerd[1616]: 2025-12-12 18:20:12.900 [INFO][4408] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1aa28367aae464bf733925c7b7408774576288c519454a427a685a5a5675c977" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:13.357556 containerd[1616]: 2025-12-12 18:20:12.942 [INFO][4408] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:13.357556 containerd[1616]: 2025-12-12 18:20:12.997 [INFO][4408] ipam/ipam.go 511: Trying affinity for 192.168.101.128/26 host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:13.357556 containerd[1616]: 2025-12-12 18:20:13.005 [INFO][4408] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.128/26 host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:13.357556 containerd[1616]: 2025-12-12 18:20:13.017 [INFO][4408] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.128/26 host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:13.357556 containerd[1616]: 2025-12-12 18:20:13.021 [INFO][4408] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.1aa28367aae464bf733925c7b7408774576288c519454a427a685a5a5675c977" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:13.357556 containerd[1616]: 2025-12-12 18:20:13.027 [INFO][4408] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1aa28367aae464bf733925c7b7408774576288c519454a427a685a5a5675c977 Dec 12 18:20:13.357556 containerd[1616]: 2025-12-12 18:20:13.069 [INFO][4408] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.1aa28367aae464bf733925c7b7408774576288c519454a427a685a5a5675c977" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:13.357556 containerd[1616]: 2025-12-12 18:20:13.200 [INFO][4408] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.101.134/26] block=192.168.101.128/26 handle="k8s-pod-network.1aa28367aae464bf733925c7b7408774576288c519454a427a685a5a5675c977" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:13.357556 containerd[1616]: 2025-12-12 18:20:13.200 [INFO][4408] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.134/26] handle="k8s-pod-network.1aa28367aae464bf733925c7b7408774576288c519454a427a685a5a5675c977" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:13.357556 containerd[1616]: 2025-12-12 18:20:13.200 [INFO][4408] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:20:13.357556 containerd[1616]: 2025-12-12 18:20:13.201 [INFO][4408] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.101.134/26] IPv6=[] ContainerID="1aa28367aae464bf733925c7b7408774576288c519454a427a685a5a5675c977" HandleID="k8s-pod-network.1aa28367aae464bf733925c7b7408774576288c519454a427a685a5a5675c977" Workload="ci--4515.1.0--f--8be9c60ab1-k8s-calico--kube--controllers--7fcbf96c45--vldxn-eth0" Dec 12 18:20:13.361578 containerd[1616]: 2025-12-12 18:20:13.251 [INFO][4370] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1aa28367aae464bf733925c7b7408774576288c519454a427a685a5a5675c977" Namespace="calico-system" Pod="calico-kube-controllers-7fcbf96c45-vldxn" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-calico--kube--controllers--7fcbf96c45--vldxn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--f--8be9c60ab1-k8s-calico--kube--controllers--7fcbf96c45--vldxn-eth0", GenerateName:"calico-kube-controllers-7fcbf96c45-", Namespace:"calico-system", SelfLink:"", UID:"88464bd3-9403-4901-97b2-3cffb941f328", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 19, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fcbf96c45", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-f-8be9c60ab1", ContainerID:"", Pod:"calico-kube-controllers-7fcbf96c45-vldxn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.101.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali24dec623eff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:20:13.361578 containerd[1616]: 2025-12-12 18:20:13.254 [INFO][4370] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.134/32] ContainerID="1aa28367aae464bf733925c7b7408774576288c519454a427a685a5a5675c977" Namespace="calico-system" Pod="calico-kube-controllers-7fcbf96c45-vldxn" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-calico--kube--controllers--7fcbf96c45--vldxn-eth0" Dec 12 18:20:13.361578 containerd[1616]: 2025-12-12 18:20:13.258 [INFO][4370] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali24dec623eff ContainerID="1aa28367aae464bf733925c7b7408774576288c519454a427a685a5a5675c977" Namespace="calico-system" Pod="calico-kube-controllers-7fcbf96c45-vldxn" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-calico--kube--controllers--7fcbf96c45--vldxn-eth0" Dec 12 18:20:13.361578 containerd[1616]: 2025-12-12 18:20:13.304 [INFO][4370] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1aa28367aae464bf733925c7b7408774576288c519454a427a685a5a5675c977" Namespace="calico-system" Pod="calico-kube-controllers-7fcbf96c45-vldxn" 
WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-calico--kube--controllers--7fcbf96c45--vldxn-eth0" Dec 12 18:20:13.361578 containerd[1616]: 2025-12-12 18:20:13.309 [INFO][4370] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1aa28367aae464bf733925c7b7408774576288c519454a427a685a5a5675c977" Namespace="calico-system" Pod="calico-kube-controllers-7fcbf96c45-vldxn" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-calico--kube--controllers--7fcbf96c45--vldxn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--f--8be9c60ab1-k8s-calico--kube--controllers--7fcbf96c45--vldxn-eth0", GenerateName:"calico-kube-controllers-7fcbf96c45-", Namespace:"calico-system", SelfLink:"", UID:"88464bd3-9403-4901-97b2-3cffb941f328", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 19, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fcbf96c45", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-f-8be9c60ab1", ContainerID:"1aa28367aae464bf733925c7b7408774576288c519454a427a685a5a5675c977", Pod:"calico-kube-controllers-7fcbf96c45-vldxn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.101.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali24dec623eff", MAC:"6a:98:1b:67:94:5e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:20:13.361578 containerd[1616]: 2025-12-12 18:20:13.343 [INFO][4370] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1aa28367aae464bf733925c7b7408774576288c519454a427a685a5a5675c977" Namespace="calico-system" Pod="calico-kube-controllers-7fcbf96c45-vldxn" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-calico--kube--controllers--7fcbf96c45--vldxn-eth0" Dec 12 18:20:13.415000 audit: BPF prog-id=224 op=LOAD Dec 12 18:20:13.417000 audit: BPF prog-id=225 op=LOAD Dec 12 18:20:13.417000 audit[4529]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4515 pid=4529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:13.417000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765373964323235623631383161646461666466653734333232353137 Dec 12 18:20:13.418000 audit: BPF prog-id=225 op=UNLOAD Dec 12 18:20:13.418000 audit[4529]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4515 pid=4529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:13.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765373964323235623631383161646461666466653734333232353137 Dec 12 18:20:13.419000 audit: BPF prog-id=226 op=LOAD Dec 12 18:20:13.419000 audit[4529]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4515 pid=4529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:13.419000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765373964323235623631383161646461666466653734333232353137 Dec 12 18:20:13.419000 audit: BPF prog-id=227 op=LOAD Dec 12 18:20:13.419000 audit[4529]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4515 pid=4529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:13.419000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765373964323235623631383161646461666466653734333232353137 Dec 12 18:20:13.419000 audit: BPF prog-id=227 op=UNLOAD Dec 12 18:20:13.419000 audit[4529]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4515 pid=4529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:13.419000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765373964323235623631383161646461666466653734333232353137 Dec 12 18:20:13.420000 audit: BPF prog-id=226 op=UNLOAD Dec 12 18:20:13.420000 audit[4529]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4515 pid=4529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:13.420000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765373964323235623631383161646461666466653734333232353137 Dec 12 18:20:13.421000 audit: BPF prog-id=228 op=LOAD Dec 12 18:20:13.421000 audit[4529]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4515 pid=4529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:13.421000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765373964323235623631383161646461666466653734333232353137 Dec 12 18:20:13.522000 audit[4588]: NETFILTER_CFG table=filter:132 family=2 entries=54 op=nft_register_chain pid=4588 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:20:13.522000 audit[4588]: SYSCALL arch=c000003e syscall=46 success=yes exit=29396 a0=3 a1=7ffe575c4d70 a2=0 a3=7ffe575c4d5c items=0 ppid=3949 pid=4588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:13.522000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:20:13.532849 kubelet[2819]: E1212 18:20:13.532344 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:20:13.552000 audit: BPF prog-id=229 op=LOAD Dec 12 18:20:13.556000 audit: BPF prog-id=230 op=LOAD Dec 12 18:20:13.556000 audit[4495]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4475 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:13.556000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662316630623636306135383332613963636439303132653330643738 Dec 12 18:20:13.557000 audit: BPF prog-id=230 op=UNLOAD Dec 12 18:20:13.557000 audit[4495]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4475 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:13.557000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662316630623636306135383332613963636439303132653330643738 Dec 12 18:20:13.559000 audit: BPF prog-id=231 op=LOAD Dec 12 18:20:13.559000 audit[4495]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4475 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:13.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662316630623636306135383332613963636439303132653330643738 Dec 12 18:20:13.559000 audit: BPF prog-id=232 op=LOAD Dec 12 18:20:13.559000 audit[4495]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4475 pid=4495 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:13.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662316630623636306135383332613963636439303132653330643738 Dec 12 18:20:13.559000 audit: BPF prog-id=232 op=UNLOAD Dec 12 18:20:13.559000 audit[4495]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4475 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:13.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662316630623636306135383332613963636439303132653330643738 Dec 12 18:20:13.560000 audit: BPF prog-id=231 op=UNLOAD Dec 12 18:20:13.560000 audit[4495]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4475 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:13.560000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662316630623636306135383332613963636439303132653330643738 Dec 12 18:20:13.561000 audit: BPF prog-id=233 op=LOAD Dec 12 18:20:13.561000 audit[4495]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4475 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:13.561000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662316630623636306135383332613963636439303132653330643738 Dec 12 18:20:13.577256 containerd[1616]: time="2025-12-12T18:20:13.575756237Z" level=info msg="connecting to shim 1aa28367aae464bf733925c7b7408774576288c519454a427a685a5a5675c977" address="unix:///run/containerd/s/a7daa505347d4c6f5f6e79b0d5991763cd91d051f5a8e7bb373fa870d0d66e82" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:20:13.729331 systemd-networkd[1510]: calia6960c5cee4: Link UP Dec 12 18:20:13.731887 systemd-networkd[1510]: calia6960c5cee4: Gained carrier Dec 12 18:20:13.782920 containerd[1616]: 2025-12-12 18:20:13.384 [INFO][4538] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--f--8be9c60ab1-k8s-coredns--668d6bf9bc--q4kjx-eth0 coredns-668d6bf9bc- kube-system f4950266-b324-4bd8-9271-ead6b00ca6f0 914 0 2025-12-12 18:19:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515.1.0-f-8be9c60ab1 
coredns-668d6bf9bc-q4kjx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia6960c5cee4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ae3c6bd3492ce8cddc9a2e846faad03f58f08fba523559eb15da1b553c656c1e" Namespace="kube-system" Pod="coredns-668d6bf9bc-q4kjx" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-coredns--668d6bf9bc--q4kjx-" Dec 12 18:20:13.782920 containerd[1616]: 2025-12-12 18:20:13.385 [INFO][4538] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ae3c6bd3492ce8cddc9a2e846faad03f58f08fba523559eb15da1b553c656c1e" Namespace="kube-system" Pod="coredns-668d6bf9bc-q4kjx" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-coredns--668d6bf9bc--q4kjx-eth0" Dec 12 18:20:13.782920 containerd[1616]: 2025-12-12 18:20:13.522 [INFO][4584] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ae3c6bd3492ce8cddc9a2e846faad03f58f08fba523559eb15da1b553c656c1e" HandleID="k8s-pod-network.ae3c6bd3492ce8cddc9a2e846faad03f58f08fba523559eb15da1b553c656c1e" Workload="ci--4515.1.0--f--8be9c60ab1-k8s-coredns--668d6bf9bc--q4kjx-eth0" Dec 12 18:20:13.782920 containerd[1616]: 2025-12-12 18:20:13.524 [INFO][4584] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ae3c6bd3492ce8cddc9a2e846faad03f58f08fba523559eb15da1b553c656c1e" HandleID="k8s-pod-network.ae3c6bd3492ce8cddc9a2e846faad03f58f08fba523559eb15da1b553c656c1e" Workload="ci--4515.1.0--f--8be9c60ab1-k8s-coredns--668d6bf9bc--q4kjx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003219a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515.1.0-f-8be9c60ab1", "pod":"coredns-668d6bf9bc-q4kjx", "timestamp":"2025-12-12 18:20:13.522925833 +0000 UTC"}, Hostname:"ci-4515.1.0-f-8be9c60ab1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:20:13.782920 containerd[1616]: 2025-12-12 18:20:13.527 [INFO][4584] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:20:13.782920 containerd[1616]: 2025-12-12 18:20:13.527 [INFO][4584] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
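The repeated kubelet dns.go "Nameserver limits exceeded" errors above (18:20:13.127 and 18:20:13.532) mean the resolv.conf seen by kubelet listed more nameservers than it will propagate to pods, so the extras were dropped; the applied line "67.207.67.3 67.207.67.2 67.207.67.3" is what remained, and it still repeats 67.207.67.3. Kubelet's cap is three nameservers in the releases I am aware of. A minimal sketch of that cap, with the constant and helper named here purely for illustration:

package main

import "fmt"

// maxDNSNameservers mirrors kubelet's cap on nameservers propagated to pods
// (three, to the best of my knowledge); the helper below is an illustration,
// not kubelet's code.
const maxDNSNameservers = 3

func applyNameserverLimit(servers []string) (applied, omitted []string) {
	if len(servers) <= maxDNSNameservers {
		return servers, nil
	}
	return servers[:maxDNSNameservers], servers[maxDNSNameservers:]
}

func main() {
	// A resolv.conf like the one the log implies; the fourth entry is purely
	// hypothetical, added only to push the list over the limit.
	servers := []string{"67.207.67.3", "67.207.67.2", "67.207.67.3", "192.0.2.53"}
	applied, omitted := applyNameserverLimit(servers)
	fmt.Println("applied:", applied)
	fmt.Println("omitted:", omitted)
}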
Dec 12 18:20:13.782920 containerd[1616]: 2025-12-12 18:20:13.527 [INFO][4584] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-f-8be9c60ab1' Dec 12 18:20:13.782920 containerd[1616]: 2025-12-12 18:20:13.563 [INFO][4584] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ae3c6bd3492ce8cddc9a2e846faad03f58f08fba523559eb15da1b553c656c1e" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:13.782920 containerd[1616]: 2025-12-12 18:20:13.600 [INFO][4584] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:13.782920 containerd[1616]: 2025-12-12 18:20:13.640 [INFO][4584] ipam/ipam.go 511: Trying affinity for 192.168.101.128/26 host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:13.782920 containerd[1616]: 2025-12-12 18:20:13.646 [INFO][4584] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.128/26 host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:13.782920 containerd[1616]: 2025-12-12 18:20:13.658 [INFO][4584] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.128/26 host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:13.782920 containerd[1616]: 2025-12-12 18:20:13.659 [INFO][4584] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.ae3c6bd3492ce8cddc9a2e846faad03f58f08fba523559eb15da1b553c656c1e" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:13.782920 containerd[1616]: 2025-12-12 18:20:13.662 [INFO][4584] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ae3c6bd3492ce8cddc9a2e846faad03f58f08fba523559eb15da1b553c656c1e Dec 12 18:20:13.782920 containerd[1616]: 2025-12-12 18:20:13.686 [INFO][4584] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.ae3c6bd3492ce8cddc9a2e846faad03f58f08fba523559eb15da1b553c656c1e" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:13.782920 containerd[1616]: 2025-12-12 18:20:13.702 [INFO][4584] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.101.135/26] block=192.168.101.128/26 handle="k8s-pod-network.ae3c6bd3492ce8cddc9a2e846faad03f58f08fba523559eb15da1b553c656c1e" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:13.782920 containerd[1616]: 2025-12-12 18:20:13.704 [INFO][4584] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.135/26] handle="k8s-pod-network.ae3c6bd3492ce8cddc9a2e846faad03f58f08fba523559eb15da1b553c656c1e" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:13.782920 containerd[1616]: 2025-12-12 18:20:13.704 [INFO][4584] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
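By this point three addresses have been claimed from the host's affine block 192.168.101.128/26: .133 (calico-apiserver-9bb959468-57r44), .134 (calico-kube-controllers-7fcbf96c45-vldxn) and .135 (coredns-668d6bf9bc-q4kjx); goldmane-666569f655-h7lvc receives .136 further down. A /26 spans 64 addresses, 192.168.101.128 through 192.168.101.191, which the standard library confirms:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.101.128/26")

	// Walk the block to count its addresses and find the last one.
	count, last := 0, block.Addr()
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		count++
		last = a
	}
	fmt.Printf("%s: %d addresses, %s - %s\n", block, count, block.Addr(), last)
	// 192.168.101.128/26: 64 addresses, 192.168.101.128 - 192.168.101.191

	// The assignments observed in this log all fall inside the block.
	for _, ip := range []string{"192.168.101.133", "192.168.101.134", "192.168.101.135", "192.168.101.136"} {
		fmt.Println(ip, "in block:", block.Contains(netip.MustParseAddr(ip)))
	}
}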
Dec 12 18:20:13.782920 containerd[1616]: 2025-12-12 18:20:13.704 [INFO][4584] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.101.135/26] IPv6=[] ContainerID="ae3c6bd3492ce8cddc9a2e846faad03f58f08fba523559eb15da1b553c656c1e" HandleID="k8s-pod-network.ae3c6bd3492ce8cddc9a2e846faad03f58f08fba523559eb15da1b553c656c1e" Workload="ci--4515.1.0--f--8be9c60ab1-k8s-coredns--668d6bf9bc--q4kjx-eth0" Dec 12 18:20:13.789014 containerd[1616]: 2025-12-12 18:20:13.718 [INFO][4538] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ae3c6bd3492ce8cddc9a2e846faad03f58f08fba523559eb15da1b553c656c1e" Namespace="kube-system" Pod="coredns-668d6bf9bc-q4kjx" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-coredns--668d6bf9bc--q4kjx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--f--8be9c60ab1-k8s-coredns--668d6bf9bc--q4kjx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f4950266-b324-4bd8-9271-ead6b00ca6f0", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 19, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-f-8be9c60ab1", ContainerID:"", Pod:"coredns-668d6bf9bc-q4kjx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.101.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia6960c5cee4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:20:13.789014 containerd[1616]: 2025-12-12 18:20:13.718 [INFO][4538] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.135/32] ContainerID="ae3c6bd3492ce8cddc9a2e846faad03f58f08fba523559eb15da1b553c656c1e" Namespace="kube-system" Pod="coredns-668d6bf9bc-q4kjx" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-coredns--668d6bf9bc--q4kjx-eth0" Dec 12 18:20:13.789014 containerd[1616]: 2025-12-12 18:20:13.719 [INFO][4538] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia6960c5cee4 ContainerID="ae3c6bd3492ce8cddc9a2e846faad03f58f08fba523559eb15da1b553c656c1e" Namespace="kube-system" Pod="coredns-668d6bf9bc-q4kjx" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-coredns--668d6bf9bc--q4kjx-eth0" Dec 12 18:20:13.789014 containerd[1616]: 2025-12-12 18:20:13.732 [INFO][4538] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ae3c6bd3492ce8cddc9a2e846faad03f58f08fba523559eb15da1b553c656c1e" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-q4kjx" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-coredns--668d6bf9bc--q4kjx-eth0" Dec 12 18:20:13.789014 containerd[1616]: 2025-12-12 18:20:13.734 [INFO][4538] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ae3c6bd3492ce8cddc9a2e846faad03f58f08fba523559eb15da1b553c656c1e" Namespace="kube-system" Pod="coredns-668d6bf9bc-q4kjx" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-coredns--668d6bf9bc--q4kjx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--f--8be9c60ab1-k8s-coredns--668d6bf9bc--q4kjx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f4950266-b324-4bd8-9271-ead6b00ca6f0", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 19, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-f-8be9c60ab1", ContainerID:"ae3c6bd3492ce8cddc9a2e846faad03f58f08fba523559eb15da1b553c656c1e", Pod:"coredns-668d6bf9bc-q4kjx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.101.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia6960c5cee4", MAC:"96:3f:39:08:98:06", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:20:13.789014 containerd[1616]: 2025-12-12 18:20:13.759 [INFO][4538] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ae3c6bd3492ce8cddc9a2e846faad03f58f08fba523559eb15da1b553c656c1e" Namespace="kube-system" Pod="coredns-668d6bf9bc-q4kjx" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-coredns--668d6bf9bc--q4kjx-eth0" Dec 12 18:20:13.817204 systemd[1]: Started cri-containerd-1aa28367aae464bf733925c7b7408774576288c519454a427a685a5a5675c977.scope - libcontainer container 1aa28367aae464bf733925c7b7408774576288c519454a427a685a5a5675c977. 
Dec 12 18:20:13.910262 containerd[1616]: time="2025-12-12T18:20:13.910161386Z" level=info msg="connecting to shim ae3c6bd3492ce8cddc9a2e846faad03f58f08fba523559eb15da1b553c656c1e" address="unix:///run/containerd/s/a07c9a47b262d8e961f900505cdbf9da2a7a9107f8e188865705926c04c575a5" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:20:13.919798 containerd[1616]: time="2025-12-12T18:20:13.919626422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9bb959468-57r44,Uid:ab0029a5-8491-42f1-b060-fef0c0422b49,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7e79d225b6181addafdfe74322517b392eca6a4a555f6b0d5a4926f20489855b\"" Dec 12 18:20:13.926510 containerd[1616]: time="2025-12-12T18:20:13.926338971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:20:13.943303 systemd-networkd[1510]: cali9107fa1cf7a: Gained IPv6LL Dec 12 18:20:13.945521 systemd-networkd[1510]: calid06e99e3bbc: Link UP Dec 12 18:20:13.949012 systemd-networkd[1510]: calid06e99e3bbc: Gained carrier Dec 12 18:20:14.007617 systemd-networkd[1510]: cali9116ed1bafd: Gained IPv6LL Dec 12 18:20:14.057000 audit: BPF prog-id=234 op=LOAD Dec 12 18:20:14.060000 audit: BPF prog-id=235 op=LOAD Dec 12 18:20:14.060000 audit[4620]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4604 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:14.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161613238333637616165343634626637333339323563376237343038 Dec 12 18:20:14.060000 audit: BPF prog-id=235 op=UNLOAD Dec 12 18:20:14.060000 audit[4620]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4604 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:14.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161613238333637616165343634626637333339323563376237343038 Dec 12 18:20:14.064290 containerd[1616]: time="2025-12-12T18:20:14.063930977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f9c8d5fbb-p96pq,Uid:ed1bf1c8-646f-4c33-9642-90a577c1d786,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1f55a01a3f92d6320d826cf732eb397e7decb54be209fc2c5f3af55a9dedc265\"" Dec 12 18:20:14.064000 audit: BPF prog-id=236 op=LOAD Dec 12 18:20:14.064000 audit[4620]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4604 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:14.064000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161613238333637616165343634626637333339323563376237343038 Dec 12 18:20:14.065000 audit: BPF prog-id=237 op=LOAD Dec 12 18:20:14.065000 audit[4620]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4604 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:14.065000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161613238333637616165343634626637333339323563376237343038 Dec 12 18:20:14.066000 audit: BPF prog-id=237 op=UNLOAD Dec 12 18:20:14.066000 audit[4620]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4604 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:14.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161613238333637616165343634626637333339323563376237343038 Dec 12 18:20:14.066000 audit: BPF prog-id=236 op=UNLOAD Dec 12 18:20:14.066000 audit[4620]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4604 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:14.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161613238333637616165343634626637333339323563376237343038 Dec 12 18:20:14.069449 containerd[1616]: 2025-12-12 18:20:13.540 [INFO][4535] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--f--8be9c60ab1-k8s-goldmane--666569f655--h7lvc-eth0 goldmane-666569f655- calico-system c57efa3a-e82c-436b-9c07-8cf6921dcd5d 904 0 2025-12-12 18:19:44 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4515.1.0-f-8be9c60ab1 goldmane-666569f655-h7lvc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid06e99e3bbc [] [] }} ContainerID="8a7e266c90da4fd19b0165a3751ebe16a1314d830ee210bc436e1043b31a2ed8" Namespace="calico-system" Pod="goldmane-666569f655-h7lvc" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-goldmane--666569f655--h7lvc-" Dec 12 18:20:14.069449 containerd[1616]: 2025-12-12 18:20:13.544 [INFO][4535] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8a7e266c90da4fd19b0165a3751ebe16a1314d830ee210bc436e1043b31a2ed8" Namespace="calico-system" Pod="goldmane-666569f655-h7lvc" 
WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-goldmane--666569f655--h7lvc-eth0" Dec 12 18:20:14.069449 containerd[1616]: 2025-12-12 18:20:13.687 [INFO][4610] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a7e266c90da4fd19b0165a3751ebe16a1314d830ee210bc436e1043b31a2ed8" HandleID="k8s-pod-network.8a7e266c90da4fd19b0165a3751ebe16a1314d830ee210bc436e1043b31a2ed8" Workload="ci--4515.1.0--f--8be9c60ab1-k8s-goldmane--666569f655--h7lvc-eth0" Dec 12 18:20:14.069449 containerd[1616]: 2025-12-12 18:20:13.689 [INFO][4610] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8a7e266c90da4fd19b0165a3751ebe16a1314d830ee210bc436e1043b31a2ed8" HandleID="k8s-pod-network.8a7e266c90da4fd19b0165a3751ebe16a1314d830ee210bc436e1043b31a2ed8" Workload="ci--4515.1.0--f--8be9c60ab1-k8s-goldmane--666569f655--h7lvc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5e20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-f-8be9c60ab1", "pod":"goldmane-666569f655-h7lvc", "timestamp":"2025-12-12 18:20:13.68712621 +0000 UTC"}, Hostname:"ci-4515.1.0-f-8be9c60ab1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:20:14.069449 containerd[1616]: 2025-12-12 18:20:13.689 [INFO][4610] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:20:14.069449 containerd[1616]: 2025-12-12 18:20:13.704 [INFO][4610] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:20:14.069449 containerd[1616]: 2025-12-12 18:20:13.704 [INFO][4610] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-f-8be9c60ab1' Dec 12 18:20:14.069449 containerd[1616]: 2025-12-12 18:20:13.728 [INFO][4610] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8a7e266c90da4fd19b0165a3751ebe16a1314d830ee210bc436e1043b31a2ed8" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:14.069449 containerd[1616]: 2025-12-12 18:20:13.775 [INFO][4610] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:14.069449 containerd[1616]: 2025-12-12 18:20:13.815 [INFO][4610] ipam/ipam.go 511: Trying affinity for 192.168.101.128/26 host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:14.069449 containerd[1616]: 2025-12-12 18:20:13.824 [INFO][4610] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.128/26 host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:14.069449 containerd[1616]: 2025-12-12 18:20:13.864 [INFO][4610] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.128/26 host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:14.069449 containerd[1616]: 2025-12-12 18:20:13.864 [INFO][4610] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.8a7e266c90da4fd19b0165a3751ebe16a1314d830ee210bc436e1043b31a2ed8" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:14.069449 containerd[1616]: 2025-12-12 18:20:13.870 [INFO][4610] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8a7e266c90da4fd19b0165a3751ebe16a1314d830ee210bc436e1043b31a2ed8 Dec 12 18:20:14.069449 containerd[1616]: 2025-12-12 18:20:13.880 [INFO][4610] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.8a7e266c90da4fd19b0165a3751ebe16a1314d830ee210bc436e1043b31a2ed8" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:14.069449 
containerd[1616]: 2025-12-12 18:20:13.903 [INFO][4610] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.101.136/26] block=192.168.101.128/26 handle="k8s-pod-network.8a7e266c90da4fd19b0165a3751ebe16a1314d830ee210bc436e1043b31a2ed8" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:14.069449 containerd[1616]: 2025-12-12 18:20:13.904 [INFO][4610] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.136/26] handle="k8s-pod-network.8a7e266c90da4fd19b0165a3751ebe16a1314d830ee210bc436e1043b31a2ed8" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:14.069449 containerd[1616]: 2025-12-12 18:20:13.904 [INFO][4610] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:20:14.069449 containerd[1616]: 2025-12-12 18:20:13.904 [INFO][4610] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.101.136/26] IPv6=[] ContainerID="8a7e266c90da4fd19b0165a3751ebe16a1314d830ee210bc436e1043b31a2ed8" HandleID="k8s-pod-network.8a7e266c90da4fd19b0165a3751ebe16a1314d830ee210bc436e1043b31a2ed8" Workload="ci--4515.1.0--f--8be9c60ab1-k8s-goldmane--666569f655--h7lvc-eth0" Dec 12 18:20:14.071615 containerd[1616]: 2025-12-12 18:20:13.914 [INFO][4535] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8a7e266c90da4fd19b0165a3751ebe16a1314d830ee210bc436e1043b31a2ed8" Namespace="calico-system" Pod="goldmane-666569f655-h7lvc" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-goldmane--666569f655--h7lvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--f--8be9c60ab1-k8s-goldmane--666569f655--h7lvc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"c57efa3a-e82c-436b-9c07-8cf6921dcd5d", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 19, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-f-8be9c60ab1", ContainerID:"", Pod:"goldmane-666569f655-h7lvc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.101.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid06e99e3bbc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:20:14.071615 containerd[1616]: 2025-12-12 18:20:13.916 [INFO][4535] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.136/32] ContainerID="8a7e266c90da4fd19b0165a3751ebe16a1314d830ee210bc436e1043b31a2ed8" Namespace="calico-system" Pod="goldmane-666569f655-h7lvc" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-goldmane--666569f655--h7lvc-eth0" Dec 12 18:20:14.071615 containerd[1616]: 2025-12-12 18:20:13.916 [INFO][4535] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid06e99e3bbc ContainerID="8a7e266c90da4fd19b0165a3751ebe16a1314d830ee210bc436e1043b31a2ed8" Namespace="calico-system" Pod="goldmane-666569f655-h7lvc" 
WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-goldmane--666569f655--h7lvc-eth0" Dec 12 18:20:14.071615 containerd[1616]: 2025-12-12 18:20:13.955 [INFO][4535] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a7e266c90da4fd19b0165a3751ebe16a1314d830ee210bc436e1043b31a2ed8" Namespace="calico-system" Pod="goldmane-666569f655-h7lvc" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-goldmane--666569f655--h7lvc-eth0" Dec 12 18:20:14.071615 containerd[1616]: 2025-12-12 18:20:13.962 [INFO][4535] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8a7e266c90da4fd19b0165a3751ebe16a1314d830ee210bc436e1043b31a2ed8" Namespace="calico-system" Pod="goldmane-666569f655-h7lvc" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-goldmane--666569f655--h7lvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--f--8be9c60ab1-k8s-goldmane--666569f655--h7lvc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"c57efa3a-e82c-436b-9c07-8cf6921dcd5d", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 19, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-f-8be9c60ab1", ContainerID:"8a7e266c90da4fd19b0165a3751ebe16a1314d830ee210bc436e1043b31a2ed8", Pod:"goldmane-666569f655-h7lvc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.101.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid06e99e3bbc", MAC:"56:3b:94:de:c0:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:20:14.071615 containerd[1616]: 2025-12-12 18:20:14.038 [INFO][4535] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8a7e266c90da4fd19b0165a3751ebe16a1314d830ee210bc436e1043b31a2ed8" Namespace="calico-system" Pod="goldmane-666569f655-h7lvc" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-goldmane--666569f655--h7lvc-eth0" Dec 12 18:20:14.066000 audit: BPF prog-id=238 op=LOAD Dec 12 18:20:14.066000 audit[4620]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4604 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:14.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161613238333637616165343634626637333339323563376237343038 Dec 12 18:20:14.102913 systemd[1]: Started cri-containerd-ae3c6bd3492ce8cddc9a2e846faad03f58f08fba523559eb15da1b553c656c1e.scope - libcontainer container 
ae3c6bd3492ce8cddc9a2e846faad03f58f08fba523559eb15da1b553c656c1e. Dec 12 18:20:14.128439 containerd[1616]: time="2025-12-12T18:20:14.128369103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5kbmx,Uid:3530bcd5-7985-42ba-8587-569180a87a41,Namespace:calico-system,Attempt:0,}" Dec 12 18:20:14.138000 audit[4676]: NETFILTER_CFG table=filter:133 family=2 entries=108 op=nft_register_chain pid=4676 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:20:14.138000 audit[4676]: SYSCALL arch=c000003e syscall=46 success=yes exit=62780 a0=3 a1=7ffec1179970 a2=0 a3=7ffec117995c items=0 ppid=3949 pid=4676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:14.138000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:20:14.187000 audit: BPF prog-id=239 op=LOAD Dec 12 18:20:14.191000 audit: BPF prog-id=240 op=LOAD Dec 12 18:20:14.191000 audit[4678]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000212238 a2=98 a3=0 items=0 ppid=4664 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:14.191000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165336336626433343932636538636464633961326538343666616164 Dec 12 18:20:14.194000 audit: BPF prog-id=240 op=UNLOAD Dec 12 18:20:14.194000 audit[4678]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4664 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:14.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165336336626433343932636538636464633961326538343666616164 Dec 12 18:20:14.195000 audit: BPF prog-id=241 op=LOAD Dec 12 18:20:14.195000 audit[4678]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000212488 a2=98 a3=0 items=0 ppid=4664 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:14.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165336336626433343932636538636464633961326538343666616164 Dec 12 18:20:14.197000 audit: BPF prog-id=242 op=LOAD Dec 12 18:20:14.197000 audit[4678]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000212218 a2=98 a3=0 items=0 ppid=4664 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:14.197000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165336336626433343932636538636464633961326538343666616164 Dec 12 18:20:14.197000 audit: BPF prog-id=242 op=UNLOAD Dec 12 18:20:14.197000 audit[4678]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4664 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:14.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165336336626433343932636538636464633961326538343666616164 Dec 12 18:20:14.197000 audit: BPF prog-id=241 op=UNLOAD Dec 12 18:20:14.197000 audit[4678]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4664 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:14.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165336336626433343932636538636464633961326538343666616164 Dec 12 18:20:14.199000 audit: BPF prog-id=243 op=LOAD Dec 12 18:20:14.199000 audit[4678]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002126e8 a2=98 a3=0 items=0 ppid=4664 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:14.199000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165336336626433343932636538636464633961326538343666616164 Dec 12 18:20:14.202931 containerd[1616]: time="2025-12-12T18:20:14.202870938Z" level=info msg="connecting to shim 8a7e266c90da4fd19b0165a3751ebe16a1314d830ee210bc436e1043b31a2ed8" address="unix:///run/containerd/s/8ca2f22afd3259681f357d5e323190906fa7815cee9f4d58b585863e33d6fc21" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:20:14.326783 systemd-networkd[1510]: cali24dec623eff: Gained IPv6LL Dec 12 18:20:14.341590 containerd[1616]: time="2025-12-12T18:20:14.340347951Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:20:14.347498 containerd[1616]: time="2025-12-12T18:20:14.345372180Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:20:14.347498 containerd[1616]: time="2025-12-12T18:20:14.345596508Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:20:14.347698 kubelet[2819]: E1212 18:20:14.346652 2819 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:20:14.347698 kubelet[2819]: E1212 18:20:14.346721 2819 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:20:14.347698 kubelet[2819]: E1212 18:20:14.347043 2819 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f8d5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9bb959468-57r44_calico-apiserver(ab0029a5-8491-42f1-b060-fef0c0422b49): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:20:14.350608 containerd[1616]: time="2025-12-12T18:20:14.347858155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:20:14.351174 systemd[1]: Started cri-containerd-8a7e266c90da4fd19b0165a3751ebe16a1314d830ee210bc436e1043b31a2ed8.scope - libcontainer container 8a7e266c90da4fd19b0165a3751ebe16a1314d830ee210bc436e1043b31a2ed8. 
Dec 12 18:20:14.353237 kubelet[2819]: E1212 18:20:14.352732 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bb959468-57r44" podUID="ab0029a5-8491-42f1-b060-fef0c0422b49" Dec 12 18:20:14.507548 containerd[1616]: time="2025-12-12T18:20:14.505831057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9bb959468-v58pb,Uid:f4c646c7-47f1-433d-b7c4-005cccecda6a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6b1f0b660a5832a9ccd9012e30d784f4c5ae32e981850f5bc6fa565747d8e476\"" Dec 12 18:20:14.520000 audit: BPF prog-id=244 op=LOAD Dec 12 18:20:14.523000 audit: BPF prog-id=245 op=LOAD Dec 12 18:20:14.523000 audit[4735]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4724 pid=4735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:14.523000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861376532363663393064613466643139623031363561333735316562 Dec 12 18:20:14.523000 audit: BPF prog-id=245 op=UNLOAD Dec 12 18:20:14.523000 audit[4735]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4724 pid=4735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:14.523000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861376532363663393064613466643139623031363561333735316562 Dec 12 18:20:14.524000 audit: BPF prog-id=246 op=LOAD Dec 12 18:20:14.524000 audit[4735]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4724 pid=4735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:14.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861376532363663393064613466643139623031363561333735316562 Dec 12 18:20:14.524000 audit: BPF prog-id=247 op=LOAD Dec 12 18:20:14.524000 audit[4735]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4724 pid=4735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:14.524000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861376532363663393064613466643139623031363561333735316562 Dec 12 18:20:14.524000 audit: BPF prog-id=247 op=UNLOAD Dec 12 18:20:14.524000 audit[4735]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4724 pid=4735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:14.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861376532363663393064613466643139623031363561333735316562 Dec 12 18:20:14.525000 audit: BPF prog-id=246 op=UNLOAD Dec 12 18:20:14.525000 audit[4735]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4724 pid=4735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:14.525000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861376532363663393064613466643139623031363561333735316562 Dec 12 18:20:14.525000 audit: BPF prog-id=248 op=LOAD Dec 12 18:20:14.525000 audit[4735]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4724 pid=4735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:14.525000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861376532363663393064613466643139623031363561333735316562 Dec 12 18:20:14.544987 containerd[1616]: time="2025-12-12T18:20:14.544688172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fcbf96c45-vldxn,Uid:88464bd3-9403-4901-97b2-3cffb941f328,Namespace:calico-system,Attempt:0,} returns sandbox id \"1aa28367aae464bf733925c7b7408774576288c519454a427a685a5a5675c977\"" Dec 12 18:20:14.547000 audit[4785]: NETFILTER_CFG table=filter:134 family=2 entries=84 op=nft_register_chain pid=4785 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:20:14.547000 audit[4785]: SYSCALL arch=c000003e syscall=46 success=yes exit=44984 a0=3 a1=7fff40559610 a2=0 a3=7fff405595fc items=0 ppid=3949 pid=4785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:14.547000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:20:14.553112 containerd[1616]: time="2025-12-12T18:20:14.552895803Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-q4kjx,Uid:f4950266-b324-4bd8-9271-ead6b00ca6f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae3c6bd3492ce8cddc9a2e846faad03f58f08fba523559eb15da1b553c656c1e\"" Dec 12 18:20:14.556409 kubelet[2819]: E1212 18:20:14.556258 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:20:14.567148 kubelet[2819]: E1212 18:20:14.566727 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bb959468-57r44" podUID="ab0029a5-8491-42f1-b060-fef0c0422b49" Dec 12 18:20:14.581909 containerd[1616]: time="2025-12-12T18:20:14.580834440Z" level=info msg="CreateContainer within sandbox \"ae3c6bd3492ce8cddc9a2e846faad03f58f08fba523559eb15da1b553c656c1e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 18:20:14.625101 containerd[1616]: time="2025-12-12T18:20:14.624227605Z" level=info msg="Container 4507812e4b67aa5073e2df904e7f50d3cd22125b20e363c2de3ca3d8464361e0: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:20:14.640629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2946480903.mount: Deactivated successfully. Dec 12 18:20:14.647110 systemd-networkd[1510]: cali1c24635cc5e: Gained IPv6LL Dec 12 18:20:14.677824 containerd[1616]: time="2025-12-12T18:20:14.677756579Z" level=info msg="CreateContainer within sandbox \"ae3c6bd3492ce8cddc9a2e846faad03f58f08fba523559eb15da1b553c656c1e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4507812e4b67aa5073e2df904e7f50d3cd22125b20e363c2de3ca3d8464361e0\"" Dec 12 18:20:14.680958 containerd[1616]: time="2025-12-12T18:20:14.680851560Z" level=info msg="StartContainer for \"4507812e4b67aa5073e2df904e7f50d3cd22125b20e363c2de3ca3d8464361e0\"" Dec 12 18:20:14.684166 containerd[1616]: time="2025-12-12T18:20:14.684118088Z" level=info msg="connecting to shim 4507812e4b67aa5073e2df904e7f50d3cd22125b20e363c2de3ca3d8464361e0" address="unix:///run/containerd/s/a07c9a47b262d8e961f900505cdbf9da2a7a9107f8e188865705926c04c575a5" protocol=ttrpc version=3 Dec 12 18:20:14.703000 audit[4790]: NETFILTER_CFG table=filter:135 family=2 entries=14 op=nft_register_rule pid=4790 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:20:14.703000 audit[4790]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffda569e450 a2=0 a3=7ffda569e43c items=0 ppid=2939 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:14.703000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:20:14.710000 audit[4790]: NETFILTER_CFG table=nat:136 family=2 entries=20 op=nft_register_rule pid=4790 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:20:14.710000 audit[4790]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffda569e450 a2=0 a3=7ffda569e43c items=0 
ppid=2939 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:14.710000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:20:14.758992 systemd[1]: Started cri-containerd-4507812e4b67aa5073e2df904e7f50d3cd22125b20e363c2de3ca3d8464361e0.scope - libcontainer container 4507812e4b67aa5073e2df904e7f50d3cd22125b20e363c2de3ca3d8464361e0. Dec 12 18:20:14.835264 systemd-networkd[1510]: cali07e2f09e035: Link UP Dec 12 18:20:14.837000 audit: BPF prog-id=249 op=LOAD Dec 12 18:20:14.839834 containerd[1616]: time="2025-12-12T18:20:14.838426622Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:20:14.840000 audit: BPF prog-id=250 op=LOAD Dec 12 18:20:14.840000 audit[4791]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228238 a2=98 a3=0 items=0 ppid=4664 pid=4791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:14.840000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435303738313265346236376161353037336532646639303465376635 Dec 12 18:20:14.840000 audit: BPF prog-id=250 op=UNLOAD Dec 12 18:20:14.841085 systemd-networkd[1510]: cali07e2f09e035: Gained carrier Dec 12 18:20:14.840000 audit[4791]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4664 pid=4791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:14.840000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435303738313265346236376161353037336532646639303465376635 Dec 12 18:20:14.844000 audit: BPF prog-id=251 op=LOAD Dec 12 18:20:14.844000 audit[4791]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228488 a2=98 a3=0 items=0 ppid=4664 pid=4791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:14.844000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435303738313265346236376161353037336532646639303465376635 Dec 12 18:20:14.844000 audit: BPF prog-id=252 op=LOAD Dec 12 18:20:14.844000 audit[4791]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000228218 a2=98 a3=0 items=0 ppid=4664 pid=4791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:14.844000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435303738313265346236376161353037336532646639303465376635 Dec 12 18:20:14.844000 audit: BPF prog-id=252 op=UNLOAD Dec 12 18:20:14.844000 audit[4791]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4664 pid=4791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:14.844000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435303738313265346236376161353037336532646639303465376635 Dec 12 18:20:14.845000 audit: BPF prog-id=251 op=UNLOAD Dec 12 18:20:14.845000 audit[4791]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4664 pid=4791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:14.845000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435303738313265346236376161353037336532646639303465376635 Dec 12 18:20:14.845000 audit: BPF prog-id=253 op=LOAD Dec 12 18:20:14.851858 containerd[1616]: time="2025-12-12T18:20:14.851571314Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:20:14.851858 containerd[1616]: time="2025-12-12T18:20:14.851691820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:20:14.845000 audit[4791]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002286e8 a2=98 a3=0 items=0 ppid=4664 pid=4791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:14.845000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435303738313265346236376161353037336532646639303465376635 Dec 12 18:20:14.853518 kubelet[2819]: E1212 18:20:14.853416 2819 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:20:14.853805 kubelet[2819]: E1212 18:20:14.853532 2819 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:20:14.854498 kubelet[2819]: E1212 18:20:14.854127 2819 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ftxcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f9c8d5fbb-p96pq_calico-apiserver(ed1bf1c8-646f-4c33-9642-90a577c1d786): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:20:14.856582 kubelet[2819]: E1212 18:20:14.856527 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f9c8d5fbb-p96pq" podUID="ed1bf1c8-646f-4c33-9642-90a577c1d786" Dec 12 18:20:14.857921 containerd[1616]: time="2025-12-12T18:20:14.857870970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-h7lvc,Uid:c57efa3a-e82c-436b-9c07-8cf6921dcd5d,Namespace:calico-system,Attempt:0,} returns sandbox id \"8a7e266c90da4fd19b0165a3751ebe16a1314d830ee210bc436e1043b31a2ed8\"" Dec 12 18:20:14.860868 containerd[1616]: time="2025-12-12T18:20:14.860812902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:20:14.907961 containerd[1616]: 2025-12-12 
18:20:14.343 [INFO][4714] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--f--8be9c60ab1-k8s-csi--node--driver--5kbmx-eth0 csi-node-driver- calico-system 3530bcd5-7985-42ba-8587-569180a87a41 783 0 2025-12-12 18:19:47 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4515.1.0-f-8be9c60ab1 csi-node-driver-5kbmx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali07e2f09e035 [] [] }} ContainerID="a202591647e174ded5ac96a92ad10a755f60308c675b20ee06956fe7a04c7ade" Namespace="calico-system" Pod="csi-node-driver-5kbmx" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-csi--node--driver--5kbmx-" Dec 12 18:20:14.907961 containerd[1616]: 2025-12-12 18:20:14.361 [INFO][4714] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a202591647e174ded5ac96a92ad10a755f60308c675b20ee06956fe7a04c7ade" Namespace="calico-system" Pod="csi-node-driver-5kbmx" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-csi--node--driver--5kbmx-eth0" Dec 12 18:20:14.907961 containerd[1616]: 2025-12-12 18:20:14.639 [INFO][4762] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a202591647e174ded5ac96a92ad10a755f60308c675b20ee06956fe7a04c7ade" HandleID="k8s-pod-network.a202591647e174ded5ac96a92ad10a755f60308c675b20ee06956fe7a04c7ade" Workload="ci--4515.1.0--f--8be9c60ab1-k8s-csi--node--driver--5kbmx-eth0" Dec 12 18:20:14.907961 containerd[1616]: 2025-12-12 18:20:14.640 [INFO][4762] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a202591647e174ded5ac96a92ad10a755f60308c675b20ee06956fe7a04c7ade" HandleID="k8s-pod-network.a202591647e174ded5ac96a92ad10a755f60308c675b20ee06956fe7a04c7ade" Workload="ci--4515.1.0--f--8be9c60ab1-k8s-csi--node--driver--5kbmx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e6550), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-f-8be9c60ab1", "pod":"csi-node-driver-5kbmx", "timestamp":"2025-12-12 18:20:14.639798648 +0000 UTC"}, Hostname:"ci-4515.1.0-f-8be9c60ab1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:20:14.907961 containerd[1616]: 2025-12-12 18:20:14.641 [INFO][4762] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:20:14.907961 containerd[1616]: 2025-12-12 18:20:14.642 [INFO][4762] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:20:14.907961 containerd[1616]: 2025-12-12 18:20:14.642 [INFO][4762] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-f-8be9c60ab1' Dec 12 18:20:14.907961 containerd[1616]: 2025-12-12 18:20:14.673 [INFO][4762] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a202591647e174ded5ac96a92ad10a755f60308c675b20ee06956fe7a04c7ade" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:14.907961 containerd[1616]: 2025-12-12 18:20:14.696 [INFO][4762] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:14.907961 containerd[1616]: 2025-12-12 18:20:14.726 [INFO][4762] ipam/ipam.go 511: Trying affinity for 192.168.101.128/26 host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:14.907961 containerd[1616]: 2025-12-12 18:20:14.736 [INFO][4762] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.128/26 host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:14.907961 containerd[1616]: 2025-12-12 18:20:14.753 [INFO][4762] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.128/26 host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:14.907961 containerd[1616]: 2025-12-12 18:20:14.753 [INFO][4762] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.a202591647e174ded5ac96a92ad10a755f60308c675b20ee06956fe7a04c7ade" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:14.907961 containerd[1616]: 2025-12-12 18:20:14.757 [INFO][4762] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a202591647e174ded5ac96a92ad10a755f60308c675b20ee06956fe7a04c7ade Dec 12 18:20:14.907961 containerd[1616]: 2025-12-12 18:20:14.771 [INFO][4762] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.a202591647e174ded5ac96a92ad10a755f60308c675b20ee06956fe7a04c7ade" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:14.907961 containerd[1616]: 2025-12-12 18:20:14.798 [INFO][4762] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.101.137/26] block=192.168.101.128/26 handle="k8s-pod-network.a202591647e174ded5ac96a92ad10a755f60308c675b20ee06956fe7a04c7ade" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:14.907961 containerd[1616]: 2025-12-12 18:20:14.798 [INFO][4762] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.137/26] handle="k8s-pod-network.a202591647e174ded5ac96a92ad10a755f60308c675b20ee06956fe7a04c7ade" host="ci-4515.1.0-f-8be9c60ab1" Dec 12 18:20:14.907961 containerd[1616]: 2025-12-12 18:20:14.799 [INFO][4762] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:20:14.907961 containerd[1616]: 2025-12-12 18:20:14.799 [INFO][4762] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.101.137/26] IPv6=[] ContainerID="a202591647e174ded5ac96a92ad10a755f60308c675b20ee06956fe7a04c7ade" HandleID="k8s-pod-network.a202591647e174ded5ac96a92ad10a755f60308c675b20ee06956fe7a04c7ade" Workload="ci--4515.1.0--f--8be9c60ab1-k8s-csi--node--driver--5kbmx-eth0" Dec 12 18:20:14.910156 containerd[1616]: 2025-12-12 18:20:14.807 [INFO][4714] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a202591647e174ded5ac96a92ad10a755f60308c675b20ee06956fe7a04c7ade" Namespace="calico-system" Pod="csi-node-driver-5kbmx" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-csi--node--driver--5kbmx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--f--8be9c60ab1-k8s-csi--node--driver--5kbmx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3530bcd5-7985-42ba-8587-569180a87a41", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 19, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-f-8be9c60ab1", ContainerID:"", Pod:"csi-node-driver-5kbmx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.101.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali07e2f09e035", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:20:14.910156 containerd[1616]: 2025-12-12 18:20:14.808 [INFO][4714] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.137/32] ContainerID="a202591647e174ded5ac96a92ad10a755f60308c675b20ee06956fe7a04c7ade" Namespace="calico-system" Pod="csi-node-driver-5kbmx" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-csi--node--driver--5kbmx-eth0" Dec 12 18:20:14.910156 containerd[1616]: 2025-12-12 18:20:14.809 [INFO][4714] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali07e2f09e035 ContainerID="a202591647e174ded5ac96a92ad10a755f60308c675b20ee06956fe7a04c7ade" Namespace="calico-system" Pod="csi-node-driver-5kbmx" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-csi--node--driver--5kbmx-eth0" Dec 12 18:20:14.910156 containerd[1616]: 2025-12-12 18:20:14.848 [INFO][4714] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a202591647e174ded5ac96a92ad10a755f60308c675b20ee06956fe7a04c7ade" Namespace="calico-system" Pod="csi-node-driver-5kbmx" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-csi--node--driver--5kbmx-eth0" Dec 12 18:20:14.910156 containerd[1616]: 2025-12-12 18:20:14.862 [INFO][4714] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a202591647e174ded5ac96a92ad10a755f60308c675b20ee06956fe7a04c7ade" Namespace="calico-system" Pod="csi-node-driver-5kbmx" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-csi--node--driver--5kbmx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--f--8be9c60ab1-k8s-csi--node--driver--5kbmx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3530bcd5-7985-42ba-8587-569180a87a41", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 19, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-f-8be9c60ab1", ContainerID:"a202591647e174ded5ac96a92ad10a755f60308c675b20ee06956fe7a04c7ade", Pod:"csi-node-driver-5kbmx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.101.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali07e2f09e035", MAC:"96:93:23:06:db:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:20:14.910156 containerd[1616]: 2025-12-12 18:20:14.895 [INFO][4714] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a202591647e174ded5ac96a92ad10a755f60308c675b20ee06956fe7a04c7ade" Namespace="calico-system" Pod="csi-node-driver-5kbmx" WorkloadEndpoint="ci--4515.1.0--f--8be9c60ab1-k8s-csi--node--driver--5kbmx-eth0" Dec 12 18:20:14.922448 containerd[1616]: time="2025-12-12T18:20:14.922330044Z" level=info msg="StartContainer for \"4507812e4b67aa5073e2df904e7f50d3cd22125b20e363c2de3ca3d8464361e0\" returns successfully" Dec 12 18:20:14.990000 audit[4837]: NETFILTER_CFG table=filter:137 family=2 entries=60 op=nft_register_chain pid=4837 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:20:14.990000 audit[4837]: SYSCALL arch=c000003e syscall=46 success=yes exit=26688 a0=3 a1=7ffeab2b6620 a2=0 a3=7ffeab2b660c items=0 ppid=3949 pid=4837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:14.990000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:20:14.996767 containerd[1616]: time="2025-12-12T18:20:14.996694760Z" level=info msg="connecting to shim a202591647e174ded5ac96a92ad10a755f60308c675b20ee06956fe7a04c7ade" address="unix:///run/containerd/s/9bef6710847e6e71ae2878c6d05a3b69dfbc79f887df5983356e551b3dd3d703" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:20:15.051860 systemd[1]: Started cri-containerd-a202591647e174ded5ac96a92ad10a755f60308c675b20ee06956fe7a04c7ade.scope 
- libcontainer container a202591647e174ded5ac96a92ad10a755f60308c675b20ee06956fe7a04c7ade. Dec 12 18:20:15.076000 audit: BPF prog-id=254 op=LOAD Dec 12 18:20:15.077000 audit: BPF prog-id=255 op=LOAD Dec 12 18:20:15.077000 audit[4859]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=4846 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:15.077000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132303235393136343765313734646564356163393661393261643130 Dec 12 18:20:15.077000 audit: BPF prog-id=255 op=UNLOAD Dec 12 18:20:15.077000 audit[4859]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4846 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:15.077000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132303235393136343765313734646564356163393661393261643130 Dec 12 18:20:15.077000 audit: BPF prog-id=256 op=LOAD Dec 12 18:20:15.077000 audit[4859]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=4846 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:15.077000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132303235393136343765313734646564356163393661393261643130 Dec 12 18:20:15.077000 audit: BPF prog-id=257 op=LOAD Dec 12 18:20:15.077000 audit[4859]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=4846 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:15.077000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132303235393136343765313734646564356163393661393261643130 Dec 12 18:20:15.078000 audit: BPF prog-id=257 op=UNLOAD Dec 12 18:20:15.078000 audit[4859]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4846 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:15.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132303235393136343765313734646564356163393661393261643130 Dec 12 
18:20:15.078000 audit: BPF prog-id=256 op=UNLOAD Dec 12 18:20:15.078000 audit[4859]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4846 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:15.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132303235393136343765313734646564356163393661393261643130 Dec 12 18:20:15.078000 audit: BPF prog-id=258 op=LOAD Dec 12 18:20:15.078000 audit[4859]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=4846 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:15.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132303235393136343765313734646564356163393661393261643130 Dec 12 18:20:15.096105 systemd-networkd[1510]: calid06e99e3bbc: Gained IPv6LL Dec 12 18:20:15.120266 containerd[1616]: time="2025-12-12T18:20:15.120201192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5kbmx,Uid:3530bcd5-7985-42ba-8587-569180a87a41,Namespace:calico-system,Attempt:0,} returns sandbox id \"a202591647e174ded5ac96a92ad10a755f60308c675b20ee06956fe7a04c7ade\"" Dec 12 18:20:15.218342 containerd[1616]: time="2025-12-12T18:20:15.218262657Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:20:15.221548 containerd[1616]: time="2025-12-12T18:20:15.221455705Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:20:15.221895 containerd[1616]: time="2025-12-12T18:20:15.221533037Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:20:15.222361 kubelet[2819]: E1212 18:20:15.222260 2819 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:20:15.222361 kubelet[2819]: E1212 18:20:15.222331 2819 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:20:15.223032 kubelet[2819]: E1212 18:20:15.222958 2819 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s99m7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9bb959468-v58pb_calico-apiserver(f4c646c7-47f1-433d-b7c4-005cccecda6a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:20:15.224007 containerd[1616]: time="2025-12-12T18:20:15.223641390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:20:15.225069 kubelet[2819]: E1212 18:20:15.225011 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bb959468-v58pb" podUID="f4c646c7-47f1-433d-b7c4-005cccecda6a" Dec 12 18:20:15.350862 systemd-networkd[1510]: calia6960c5cee4: Gained IPv6LL Dec 12 18:20:15.580788 containerd[1616]: time="2025-12-12T18:20:15.580723680Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:20:15.582949 containerd[1616]: time="2025-12-12T18:20:15.582751404Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:20:15.582949 containerd[1616]: time="2025-12-12T18:20:15.582784505Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 18:20:15.584436 kubelet[2819]: E1212 18:20:15.583664 2819 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:20:15.584436 kubelet[2819]: E1212 18:20:15.583756 2819 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:20:15.585305 containerd[1616]: time="2025-12-12T18:20:15.584215988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:20:15.585798 kubelet[2819]: E1212 18:20:15.585700 2819 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mkj66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7fcbf96c45-vldxn_calico-system(88464bd3-9403-4901-97b2-3cffb941f328): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:20:15.586993 kubelet[2819]: E1212 18:20:15.586927 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fcbf96c45-vldxn" podUID="88464bd3-9403-4901-97b2-3cffb941f328" Dec 12 18:20:15.608826 kubelet[2819]: E1212 18:20:15.608670 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fcbf96c45-vldxn" podUID="88464bd3-9403-4901-97b2-3cffb941f328" Dec 12 18:20:15.612811 kubelet[2819]: E1212 18:20:15.612497 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:20:15.617305 kubelet[2819]: E1212 18:20:15.616773 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bb959468-57r44" podUID="ab0029a5-8491-42f1-b060-fef0c0422b49" Dec 12 18:20:15.617305 kubelet[2819]: E1212 18:20:15.616979 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed 
to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bb959468-v58pb" podUID="f4c646c7-47f1-433d-b7c4-005cccecda6a" Dec 12 18:20:15.619621 kubelet[2819]: E1212 18:20:15.618344 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f9c8d5fbb-p96pq" podUID="ed1bf1c8-646f-4c33-9642-90a577c1d786" Dec 12 18:20:15.679000 audit[4886]: NETFILTER_CFG table=filter:138 family=2 entries=14 op=nft_register_rule pid=4886 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:20:15.679000 audit[4886]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff13f58e70 a2=0 a3=7fff13f58e5c items=0 ppid=2939 pid=4886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:15.679000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:20:15.689000 audit[4886]: NETFILTER_CFG table=nat:139 family=2 entries=20 op=nft_register_rule pid=4886 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:20:15.689000 audit[4886]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff13f58e70 a2=0 a3=7fff13f58e5c items=0 ppid=2939 pid=4886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:15.689000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:20:15.723255 kubelet[2819]: I1212 18:20:15.723133 2819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-q4kjx" podStartSLOduration=47.723106305 podStartE2EDuration="47.723106305s" podCreationTimestamp="2025-12-12 18:19:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:20:15.722991851 +0000 UTC m=+53.834959535" watchObservedRunningTime="2025-12-12 18:20:15.723106305 +0000 UTC m=+53.835073969" Dec 12 18:20:15.730000 audit[4888]: NETFILTER_CFG table=filter:140 family=2 entries=14 op=nft_register_rule pid=4888 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:20:15.730000 audit[4888]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd9fb21580 a2=0 a3=7ffd9fb2156c items=0 ppid=2939 pid=4888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:15.730000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:20:15.740000 audit[4888]: NETFILTER_CFG table=nat:141 family=2 entries=20 op=nft_register_rule pid=4888 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Dec 12 18:20:15.740000 audit[4888]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd9fb21580 a2=0 a3=7ffd9fb2156c items=0 ppid=2939 pid=4888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:15.740000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:20:15.970101 containerd[1616]: time="2025-12-12T18:20:15.969913319Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:20:15.972671 containerd[1616]: time="2025-12-12T18:20:15.972584583Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:20:15.972917 containerd[1616]: time="2025-12-12T18:20:15.972697817Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 18:20:15.973068 kubelet[2819]: E1212 18:20:15.972989 2819 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:20:15.973166 kubelet[2819]: E1212 18:20:15.973094 2819 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:20:15.974759 containerd[1616]: time="2025-12-12T18:20:15.973605627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:20:15.974888 kubelet[2819]: E1212 18:20:15.973915 2819 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-448p8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-h7lvc_calico-system(c57efa3a-e82c-436b-9c07-8cf6921dcd5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:20:15.975244 kubelet[2819]: E1212 18:20:15.975153 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h7lvc" podUID="c57efa3a-e82c-436b-9c07-8cf6921dcd5d" Dec 12 18:20:16.119074 systemd-networkd[1510]: cali07e2f09e035: Gained IPv6LL Dec 12 18:20:16.333962 containerd[1616]: 
time="2025-12-12T18:20:16.333757998Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:20:16.336285 containerd[1616]: time="2025-12-12T18:20:16.336098407Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:20:16.336285 containerd[1616]: time="2025-12-12T18:20:16.336226699Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 18:20:16.336595 kubelet[2819]: E1212 18:20:16.336463 2819 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:20:16.336595 kubelet[2819]: E1212 18:20:16.336580 2819 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:20:16.336907 kubelet[2819]: E1212 18:20:16.336768 2819 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zmd22,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5kbmx_calico-system(3530bcd5-7985-42ba-8587-569180a87a41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Dec 12 18:20:16.340913 containerd[1616]: time="2025-12-12T18:20:16.340564393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:20:16.617520 kubelet[2819]: E1212 18:20:16.617339 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:20:16.623827 kubelet[2819]: E1212 18:20:16.623690 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fcbf96c45-vldxn" podUID="88464bd3-9403-4901-97b2-3cffb941f328" Dec 12 18:20:16.624144 kubelet[2819]: E1212 18:20:16.623893 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h7lvc" podUID="c57efa3a-e82c-436b-9c07-8cf6921dcd5d" Dec 12 18:20:16.696125 containerd[1616]: time="2025-12-12T18:20:16.695919411Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:20:16.698532 containerd[1616]: time="2025-12-12T18:20:16.698420567Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:20:16.698931 containerd[1616]: time="2025-12-12T18:20:16.698466813Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 18:20:16.699288 kubelet[2819]: E1212 18:20:16.699070 2819 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:20:16.699288 kubelet[2819]: E1212 18:20:16.699133 2819 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:20:16.699712 kubelet[2819]: E1212 18:20:16.699311 2819 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zmd22,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5kbmx_calico-system(3530bcd5-7985-42ba-8587-569180a87a41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:20:16.701686 kubelet[2819]: E1212 18:20:16.701611 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5kbmx" podUID="3530bcd5-7985-42ba-8587-569180a87a41" Dec 12 18:20:16.704000 audit[4896]: NETFILTER_CFG table=filter:142 family=2 entries=14 op=nft_register_rule pid=4896 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:20:16.705833 kernel: kauditd_printk_skb: 208 callbacks suppressed Dec 12 18:20:16.705920 kernel: audit: type=1325 audit(1765563616.704:749): table=filter:142 family=2 entries=14 op=nft_register_rule pid=4896 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:20:16.704000 audit[4896]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff69050dd0 a2=0 a3=7fff69050dbc items=0 ppid=2939 pid=4896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:16.704000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:20:16.718723 kernel: audit: type=1300 audit(1765563616.704:749): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff69050dd0 a2=0 a3=7fff69050dbc items=0 ppid=2939 pid=4896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:16.718867 kernel: audit: type=1327 audit(1765563616.704:749): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:20:16.730000 audit[4896]: NETFILTER_CFG table=nat:143 family=2 entries=56 op=nft_register_chain pid=4896 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:20:16.736772 kernel: audit: type=1325 audit(1765563616.730:750): table=nat:143 family=2 entries=56 op=nft_register_chain pid=4896 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:20:16.736918 kernel: audit: type=1300 audit(1765563616.730:750): arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fff69050dd0 a2=0 a3=7fff69050dbc items=0 ppid=2939 pid=4896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:16.730000 audit[4896]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fff69050dd0 a2=0 a3=7fff69050dbc items=0 ppid=2939 pid=4896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:16.730000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:20:16.747524 kernel: audit: type=1327 audit(1765563616.730:750): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:20:17.620700 kubelet[2819]: E1212 18:20:17.620425 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:20:17.626631 kubelet[2819]: E1212 18:20:17.626539 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5kbmx" podUID="3530bcd5-7985-42ba-8587-569180a87a41" Dec 12 18:20:24.129679 containerd[1616]: 
time="2025-12-12T18:20:24.129500254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:20:24.477910 containerd[1616]: time="2025-12-12T18:20:24.477525286Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:20:24.480249 containerd[1616]: time="2025-12-12T18:20:24.480056116Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:20:24.480249 containerd[1616]: time="2025-12-12T18:20:24.480082894Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 18:20:24.480963 kubelet[2819]: E1212 18:20:24.480908 2819 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:20:24.481823 kubelet[2819]: E1212 18:20:24.480981 2819 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:20:24.481823 kubelet[2819]: E1212 18:20:24.481137 2819 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4fe5f617f54d4643bcb5bae7103038b0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fhrn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7565c6cc-lrgtt_calico-system(f2c6d001-1096-4786-820b-c2f7a945bcac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:20:24.485735 containerd[1616]: time="2025-12-12T18:20:24.485653644Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:20:24.831262 containerd[1616]: time="2025-12-12T18:20:24.831098511Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:20:24.833469 containerd[1616]: time="2025-12-12T18:20:24.833385680Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:20:24.833730 containerd[1616]: time="2025-12-12T18:20:24.833562098Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 18:20:24.833845 kubelet[2819]: E1212 18:20:24.833796 2819 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:20:24.833960 kubelet[2819]: E1212 18:20:24.833865 2819 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:20:24.834202 kubelet[2819]: E1212 18:20:24.834073 2819 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fhrn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7565c6cc-lrgtt_calico-system(f2c6d001-1096-4786-820b-c2f7a945bcac): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:20:24.836100 kubelet[2819]: E1212 18:20:24.836006 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7565c6cc-lrgtt" podUID="f2c6d001-1096-4786-820b-c2f7a945bcac" Dec 12 18:20:28.128701 containerd[1616]: time="2025-12-12T18:20:28.128461068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:20:28.455540 containerd[1616]: time="2025-12-12T18:20:28.455329501Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:20:28.458376 containerd[1616]: time="2025-12-12T18:20:28.458289793Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:20:28.459548 containerd[1616]: time="2025-12-12T18:20:28.458333473Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:20:28.459644 kubelet[2819]: E1212 18:20:28.458621 2819 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:20:28.459644 kubelet[2819]: E1212 18:20:28.458688 2819 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:20:28.459644 kubelet[2819]: E1212 18:20:28.458848 2819 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f8d5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9bb959468-57r44_calico-apiserver(ab0029a5-8491-42f1-b060-fef0c0422b49): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:20:28.460734 kubelet[2819]: E1212 18:20:28.460546 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bb959468-57r44" podUID="ab0029a5-8491-42f1-b060-fef0c0422b49" Dec 12 18:20:30.132080 containerd[1616]: time="2025-12-12T18:20:30.131161177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:20:30.504183 containerd[1616]: time="2025-12-12T18:20:30.504023196Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:20:30.507166 containerd[1616]: time="2025-12-12T18:20:30.507082017Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:20:30.507345 containerd[1616]: time="2025-12-12T18:20:30.507200653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:20:30.507672 kubelet[2819]: 
E1212 18:20:30.507566 2819 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:20:30.507672 kubelet[2819]: E1212 18:20:30.507641 2819 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:20:30.509121 kubelet[2819]: E1212 18:20:30.508807 2819 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ftxcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f9c8d5fbb-p96pq_calico-apiserver(ed1bf1c8-646f-4c33-9642-90a577c1d786): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:20:30.509525 containerd[1616]: time="2025-12-12T18:20:30.509299115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:20:30.510397 kubelet[2819]: E1212 18:20:30.510327 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f9c8d5fbb-p96pq" podUID="ed1bf1c8-646f-4c33-9642-90a577c1d786" Dec 12 18:20:30.839326 containerd[1616]: time="2025-12-12T18:20:30.838970406Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:20:30.841813 containerd[1616]: time="2025-12-12T18:20:30.841567120Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:20:30.842253 kubelet[2819]: E1212 18:20:30.842000 2819 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:20:30.842253 kubelet[2819]: E1212 18:20:30.842171 2819 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:20:30.843022 containerd[1616]: time="2025-12-12T18:20:30.841728906Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 18:20:30.843022 containerd[1616]: time="2025-12-12T18:20:30.842609645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:20:30.843205 kubelet[2819]: E1212 18:20:30.842805 2819 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mkj66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7fcbf96c45-vldxn_calico-system(88464bd3-9403-4901-97b2-3cffb941f328): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:20:30.845180 kubelet[2819]: E1212 18:20:30.844209 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fcbf96c45-vldxn" podUID="88464bd3-9403-4901-97b2-3cffb941f328" Dec 12 18:20:31.133460 kubelet[2819]: E1212 18:20:31.133214 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:20:31.329248 containerd[1616]: time="2025-12-12T18:20:31.329191199Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:20:31.331691 containerd[1616]: time="2025-12-12T18:20:31.331540466Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:20:31.331691 containerd[1616]: time="2025-12-12T18:20:31.331654458Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:20:31.332365 kubelet[2819]: E1212 18:20:31.332284 2819 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:20:31.332870 kubelet[2819]: E1212 18:20:31.332373 2819 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:20:31.332870 kubelet[2819]: E1212 18:20:31.332711 2819 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s99m7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9bb959468-v58pb_calico-apiserver(f4c646c7-47f1-433d-b7c4-005cccecda6a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:20:31.334347 containerd[1616]: time="2025-12-12T18:20:31.333768473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:20:31.334647 kubelet[2819]: E1212 18:20:31.334424 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bb959468-v58pb" podUID="f4c646c7-47f1-433d-b7c4-005cccecda6a" Dec 12 18:20:31.651025 containerd[1616]: time="2025-12-12T18:20:31.650955105Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:20:31.653339 containerd[1616]: time="2025-12-12T18:20:31.653209865Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:20:31.653339 containerd[1616]: time="2025-12-12T18:20:31.653276694Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 18:20:31.654182 kubelet[2819]: E1212 18:20:31.653822 2819 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:20:31.654182 kubelet[2819]: E1212 18:20:31.653881 2819 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:20:31.654182 kubelet[2819]: E1212 18:20:31.654089 2819 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-448p8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-h7lvc_calico-system(c57efa3a-e82c-436b-9c07-8cf6921dcd5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:20:31.656265 kubelet[2819]: E1212 18:20:31.656097 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h7lvc" podUID="c57efa3a-e82c-436b-9c07-8cf6921dcd5d" Dec 12 18:20:33.128308 containerd[1616]: time="2025-12-12T18:20:33.128237831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:20:33.516978 containerd[1616]: time="2025-12-12T18:20:33.516897482Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:20:33.519561 containerd[1616]: time="2025-12-12T18:20:33.519387199Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:20:33.519561 containerd[1616]: time="2025-12-12T18:20:33.519473950Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 18:20:33.520407 kubelet[2819]: E1212 18:20:33.520220 2819 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:20:33.520407 kubelet[2819]: E1212 18:20:33.520280 2819 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:20:33.522036 kubelet[2819]: E1212 18:20:33.520887 2819 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zmd22,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5kbmx_calico-system(3530bcd5-7985-42ba-8587-569180a87a41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:20:33.526011 containerd[1616]: time="2025-12-12T18:20:33.525956678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:20:33.876893 containerd[1616]: time="2025-12-12T18:20:33.876583337Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:20:33.879722 containerd[1616]: time="2025-12-12T18:20:33.879623028Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:20:33.880225 containerd[1616]: time="2025-12-12T18:20:33.879675690Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 18:20:33.880702 kubelet[2819]: E1212 18:20:33.880372 2819 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:20:33.880702 kubelet[2819]: E1212 18:20:33.880449 2819 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:20:33.882274 kubelet[2819]: E1212 18:20:33.882192 2819 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zmd22,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5kbmx_calico-system(3530bcd5-7985-42ba-8587-569180a87a41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:20:33.883870 kubelet[2819]: E1212 18:20:33.883701 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5kbmx" podUID="3530bcd5-7985-42ba-8587-569180a87a41" Dec 12 18:20:34.342692 systemd[1]: Started sshd@9-64.23.253.31:22-147.75.109.163:39720.service - OpenSSH per-connection server daemon (147.75.109.163:39720). 
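The run of entries above shows the same failure for every Calico component image: containerd asks ghcr.io for the v3.30.4 manifest, receives 404 Not Found ("fetch failed after status: 404 Not Found"), and returns a gRPC NotFound error that kubelet surfaces as ErrImagePull. One way to confirm from outside the node that the tag itself is missing (rather than an auth or network problem) is to query the registry's Docker Registry v2 API directly. The sketch below is illustrative only, not taken from the log, and assumes ghcr.io's standard anonymous token flow for public repositories.

```python
# Minimal sketch (assumption: ghcr.io's standard Docker Registry v2 token flow for
# public images). A 404 on the manifest request is what containerd surfaces above as
# "failed to resolve image ...: not found". Some registries answer 401/403 instead of
# 404 for names they hide, so treat anything other than 200/404 as inconclusive.
import json
import urllib.error
import urllib.request


def tag_exists(repo: str, tag: str) -> bool:
    # 1. Fetch an anonymous pull token for the repository.
    token_url = f"https://ghcr.io/token?service=ghcr.io&scope=repository:{repo}:pull"
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]
    # 2. HEAD the manifest: 200 means the tag resolves, 404 means it does not.
    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repo}/manifests/{tag}",
        method="HEAD",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json",
        },
    )
    try:
        with urllib.request.urlopen(req):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise


if __name__ == "__main__":
    print(tag_exists("flatcar/calico/kube-controllers", "v3.30.4"))
```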
Dec 12 18:20:34.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-64.23.253.31:22-147.75.109.163:39720 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:34.350755 kernel: audit: type=1130 audit(1765563634.342:751): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-64.23.253.31:22-147.75.109.163:39720 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:34.515000 audit[4926]: USER_ACCT pid=4926 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:34.517897 sshd[4926]: Accepted publickey for core from 147.75.109.163 port 39720 ssh2: RSA SHA256:AH9sdAXiVqzWQffvo8h2BLk1KGxpD1YCntXC072RVDo Dec 12 18:20:34.522801 kernel: audit: type=1101 audit(1765563634.515:752): pid=4926 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:34.516000 audit[4926]: CRED_ACQ pid=4926 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:34.527572 sshd-session[4926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:20:34.531639 kernel: audit: type=1103 audit(1765563634.516:753): pid=4926 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:34.516000 audit[4926]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc87058570 a2=3 a3=0 items=0 ppid=1 pid=4926 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:34.539678 kernel: audit: type=1006 audit(1765563634.516:754): pid=4926 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Dec 12 18:20:34.540319 kernel: audit: type=1300 audit(1765563634.516:754): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc87058570 a2=3 a3=0 items=0 ppid=1 pid=4926 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:34.516000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:20:34.546185 kernel: audit: type=1327 audit(1765563634.516:754): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:20:34.548166 systemd-logind[1590]: New session 10 of user core. Dec 12 18:20:34.554899 systemd[1]: Started session-10.scope - Session 10 of User core. 
Dec 12 18:20:34.559000 audit[4926]: USER_START pid=4926 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:34.569582 kernel: audit: type=1105 audit(1765563634.559:755): pid=4926 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:34.569785 kernel: audit: type=1103 audit(1765563634.562:756): pid=4929 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:34.562000 audit[4929]: CRED_ACQ pid=4929 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:35.377613 sshd[4929]: Connection closed by 147.75.109.163 port 39720 Dec 12 18:20:35.380810 sshd-session[4926]: pam_unix(sshd:session): session closed for user core Dec 12 18:20:35.386000 audit[4926]: USER_END pid=4926 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:35.391885 systemd[1]: sshd@9-64.23.253.31:22-147.75.109.163:39720.service: Deactivated successfully. Dec 12 18:20:35.395523 kernel: audit: type=1106 audit(1765563635.386:757): pid=4926 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:35.396172 systemd[1]: session-10.scope: Deactivated successfully. Dec 12 18:20:35.398910 systemd-logind[1590]: Session 10 logged out. Waiting for processes to exit. Dec 12 18:20:35.386000 audit[4926]: CRED_DISP pid=4926 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:35.406575 kernel: audit: type=1104 audit(1765563635.386:758): pid=4926 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:35.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-64.23.253.31:22-147.75.109.163:39720 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:35.407951 systemd-logind[1590]: Removed session 10. 
Dec 12 18:20:37.129051 kubelet[2819]: E1212 18:20:37.128973 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7565c6cc-lrgtt" podUID="f2c6d001-1096-4786-820b-c2f7a945bcac" Dec 12 18:20:39.642196 kubelet[2819]: E1212 18:20:39.642125 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:20:40.404763 systemd[1]: Started sshd@10-64.23.253.31:22-147.75.109.163:39728.service - OpenSSH per-connection server daemon (147.75.109.163:39728). Dec 12 18:20:40.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-64.23.253.31:22-147.75.109.163:39728 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:40.408015 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 18:20:40.408119 kernel: audit: type=1130 audit(1765563640.403:760): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-64.23.253.31:22-147.75.109.163:39728 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:20:40.503000 audit[4971]: USER_ACCT pid=4971 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:40.508292 sshd-session[4971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:20:40.509365 sshd[4971]: Accepted publickey for core from 147.75.109.163 port 39728 ssh2: RSA SHA256:AH9sdAXiVqzWQffvo8h2BLk1KGxpD1YCntXC072RVDo Dec 12 18:20:40.505000 audit[4971]: CRED_ACQ pid=4971 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:40.514985 kernel: audit: type=1101 audit(1765563640.503:761): pid=4971 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:40.515132 kernel: audit: type=1103 audit(1765563640.505:762): pid=4971 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:40.521801 kernel: audit: type=1006 audit(1765563640.505:763): pid=4971 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Dec 12 18:20:40.505000 audit[4971]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce05dc1a0 a2=3 a3=0 items=0 ppid=1 pid=4971 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:40.534521 kernel: audit: type=1300 audit(1765563640.505:763): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce05dc1a0 a2=3 a3=0 items=0 ppid=1 pid=4971 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:40.534642 kernel: audit: type=1327 audit(1765563640.505:763): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:20:40.505000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:20:40.533693 systemd-logind[1590]: New session 11 of user core. Dec 12 18:20:40.541914 systemd[1]: Started session-11.scope - Session 11 of User core. 
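Interleaved with the pull failures, kubelet keeps logging "Nameserver limits exceeded" (dns.go:153). The limit in question is the classic glibc resolver cap of three nameserver entries, so when the resolv.conf kubelet propagates to pods lists more than three (here with 67.207.67.3 duplicated), everything past the third line is dropped and only the "applied nameserver line" shown in the log is used. The snippet below only illustrates that truncation; the fourth entry is a placeholder, not a value read from this node.

```python
# Illustration only: glibc's resolver honors at most three "nameserver" lines
# (MAXNS = 3), which is the limit kubelet is warning about above.
MAXNS = 3  # conventional glibc limit; assumption for this sketch


def split_nameservers(resolv_conf_text: str) -> tuple[list[str], list[str]]:
    servers = [
        line.split()[1]
        for line in resolv_conf_text.splitlines()
        if line.strip().startswith("nameserver") and len(line.split()) > 1
    ]
    return servers[:MAXNS], servers[MAXNS:]


example = """\
nameserver 67.207.67.3
nameserver 67.207.67.2
nameserver 67.207.67.3
nameserver 8.8.8.8
"""
kept, omitted = split_nameservers(example)
print("applied:", kept)     # first three entries, duplicates and all
print("omitted:", omitted)  # anything past the limit is dropped, as the warning says
```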
Dec 12 18:20:40.545000 audit[4971]: USER_START pid=4971 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:40.555586 kernel: audit: type=1105 audit(1765563640.545:764): pid=4971 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:40.556000 audit[4974]: CRED_ACQ pid=4974 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:40.565552 kernel: audit: type=1103 audit(1765563640.556:765): pid=4974 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:40.777375 sshd[4974]: Connection closed by 147.75.109.163 port 39728 Dec 12 18:20:40.779569 sshd-session[4971]: pam_unix(sshd:session): session closed for user core Dec 12 18:20:40.782000 audit[4971]: USER_END pid=4971 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:40.782000 audit[4971]: CRED_DISP pid=4971 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:40.792984 systemd[1]: sshd@10-64.23.253.31:22-147.75.109.163:39728.service: Deactivated successfully. Dec 12 18:20:40.794195 kernel: audit: type=1106 audit(1765563640.782:766): pid=4971 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:40.794301 kernel: audit: type=1104 audit(1765563640.782:767): pid=4971 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:40.798618 systemd[1]: session-11.scope: Deactivated successfully. Dec 12 18:20:40.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-64.23.253.31:22-147.75.109.163:39728 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:40.802945 systemd-logind[1590]: Session 11 logged out. Waiting for processes to exit. Dec 12 18:20:40.806090 systemd-logind[1590]: Removed session 11. 
Dec 12 18:20:42.132975 kubelet[2819]: E1212 18:20:42.132541 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f9c8d5fbb-p96pq" podUID="ed1bf1c8-646f-4c33-9642-90a577c1d786" Dec 12 18:20:43.139800 kubelet[2819]: E1212 18:20:43.139731 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bb959468-57r44" podUID="ab0029a5-8491-42f1-b060-fef0c0422b49" Dec 12 18:20:44.133147 kubelet[2819]: E1212 18:20:44.132881 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bb959468-v58pb" podUID="f4c646c7-47f1-433d-b7c4-005cccecda6a" Dec 12 18:20:45.129278 kubelet[2819]: E1212 18:20:45.129195 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h7lvc" podUID="c57efa3a-e82c-436b-9c07-8cf6921dcd5d" Dec 12 18:20:45.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-64.23.253.31:22-147.75.109.163:56288 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:45.803824 systemd[1]: Started sshd@11-64.23.253.31:22-147.75.109.163:56288.service - OpenSSH per-connection server daemon (147.75.109.163:56288). Dec 12 18:20:45.805874 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 18:20:45.805918 kernel: audit: type=1130 audit(1765563645.802:769): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-64.23.253.31:22-147.75.109.163:56288 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:20:45.887000 audit[4992]: USER_ACCT pid=4992 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:45.889817 sshd[4992]: Accepted publickey for core from 147.75.109.163 port 56288 ssh2: RSA SHA256:AH9sdAXiVqzWQffvo8h2BLk1KGxpD1YCntXC072RVDo Dec 12 18:20:45.894248 sshd-session[4992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:20:45.899222 kernel: audit: type=1101 audit(1765563645.887:770): pid=4992 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:45.899363 kernel: audit: type=1103 audit(1765563645.892:771): pid=4992 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:45.892000 audit[4992]: CRED_ACQ pid=4992 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:45.910744 kernel: audit: type=1006 audit(1765563645.892:772): pid=4992 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Dec 12 18:20:45.892000 audit[4992]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd72d01cb0 a2=3 a3=0 items=0 ppid=1 pid=4992 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:45.920593 kernel: audit: type=1300 audit(1765563645.892:772): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd72d01cb0 a2=3 a3=0 items=0 ppid=1 pid=4992 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:45.892000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:20:45.924521 kernel: audit: type=1327 audit(1765563645.892:772): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:20:45.925625 systemd-logind[1590]: New session 12 of user core. Dec 12 18:20:45.930898 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 12 18:20:45.933000 audit[4992]: USER_START pid=4992 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:45.943678 kernel: audit: type=1105 audit(1765563645.933:773): pid=4992 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:45.943000 audit[4995]: CRED_ACQ pid=4995 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:45.951545 kernel: audit: type=1103 audit(1765563645.943:774): pid=4995 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:46.080816 sshd[4995]: Connection closed by 147.75.109.163 port 56288 Dec 12 18:20:46.081976 sshd-session[4992]: pam_unix(sshd:session): session closed for user core Dec 12 18:20:46.082000 audit[4992]: USER_END pid=4992 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:46.088933 systemd[1]: sshd@11-64.23.253.31:22-147.75.109.163:56288.service: Deactivated successfully. Dec 12 18:20:46.093420 systemd[1]: session-12.scope: Deactivated successfully. Dec 12 18:20:46.094513 kernel: audit: type=1106 audit(1765563646.082:775): pid=4992 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:46.082000 audit[4992]: CRED_DISP pid=4992 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:46.102585 kernel: audit: type=1104 audit(1765563646.082:776): pid=4992 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:46.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-64.23.253.31:22-147.75.109.163:56288 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:46.104380 systemd-logind[1590]: Session 12 logged out. Waiting for processes to exit. Dec 12 18:20:46.105840 systemd-logind[1590]: Removed session 12. 
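Once a pull has failed with ErrImagePull, kubelet does not retry immediately; it re-queues the pod and backs off, which is why the same pods resurface above as ImagePullBackOff ("Back-off pulling image ...") at growing intervals rather than in a tight loop. The sketch below only illustrates that doubling backoff; the 10-second base and 300-second cap are assumptions based on kubelet's usual defaults, not values read from this log.

```python
# Sketch only: exponential image-pull backoff with an assumed 10 s base and 300 s cap.
BASE_SECONDS = 10
CAP_SECONDS = 300


def backoff_schedule(retries: int) -> list[int]:
    delays = []
    delay = BASE_SECONDS
    for _ in range(retries):
        delays.append(delay)
        delay = min(delay * 2, CAP_SECONDS)
    return delays


print(backoff_schedule(7))  # [10, 20, 40, 80, 160, 300, 300]
```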
Dec 12 18:20:46.126199 kubelet[2819]: E1212 18:20:46.126062 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:20:46.132035 kubelet[2819]: E1212 18:20:46.131800 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fcbf96c45-vldxn" podUID="88464bd3-9403-4901-97b2-3cffb941f328" Dec 12 18:20:46.132035 kubelet[2819]: E1212 18:20:46.131611 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:20:48.132703 kubelet[2819]: E1212 18:20:48.132645 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5kbmx" podUID="3530bcd5-7985-42ba-8587-569180a87a41" Dec 12 18:20:49.126673 kubelet[2819]: E1212 18:20:49.126622 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:20:51.109597 systemd[1]: Started sshd@12-64.23.253.31:22-147.75.109.163:56290.service - OpenSSH per-connection server daemon (147.75.109.163:56290). Dec 12 18:20:51.115342 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 18:20:51.115460 kernel: audit: type=1130 audit(1765563651.108:778): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-64.23.253.31:22-147.75.109.163:56290 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:51.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-64.23.253.31:22-147.75.109.163:56290 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:20:51.128044 containerd[1616]: time="2025-12-12T18:20:51.127998190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:20:51.287000 audit[5008]: USER_ACCT pid=5008 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:51.289167 sshd[5008]: Accepted publickey for core from 147.75.109.163 port 56290 ssh2: RSA SHA256:AH9sdAXiVqzWQffvo8h2BLk1KGxpD1YCntXC072RVDo Dec 12 18:20:51.294683 sshd-session[5008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:20:51.296920 kernel: audit: type=1101 audit(1765563651.287:779): pid=5008 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:51.297048 kernel: audit: type=1103 audit(1765563651.291:780): pid=5008 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:51.291000 audit[5008]: CRED_ACQ pid=5008 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:51.304594 kernel: audit: type=1006 audit(1765563651.291:781): pid=5008 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 12 18:20:51.291000 audit[5008]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc3f925720 a2=3 a3=0 items=0 ppid=1 pid=5008 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:51.315734 kernel: audit: type=1300 audit(1765563651.291:781): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc3f925720 a2=3 a3=0 items=0 ppid=1 pid=5008 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:51.315852 kernel: audit: type=1327 audit(1765563651.291:781): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:20:51.291000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:20:51.323584 systemd-logind[1590]: New session 13 of user core. Dec 12 18:20:51.328900 systemd[1]: Started session-13.scope - Session 13 of User core. 
Dec 12 18:20:51.333000 audit[5008]: USER_START pid=5008 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:51.337000 audit[5017]: CRED_ACQ pid=5017 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:51.347510 kernel: audit: type=1105 audit(1765563651.333:782): pid=5008 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:51.347659 kernel: audit: type=1103 audit(1765563651.337:783): pid=5017 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:51.506971 containerd[1616]: time="2025-12-12T18:20:51.506837630Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:20:51.510803 containerd[1616]: time="2025-12-12T18:20:51.510672637Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:20:51.510803 containerd[1616]: time="2025-12-12T18:20:51.510803469Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 18:20:51.512456 kubelet[2819]: E1212 18:20:51.512381 2819 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:20:51.514895 kubelet[2819]: E1212 18:20:51.512472 2819 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:20:51.514895 kubelet[2819]: E1212 18:20:51.512681 2819 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4fe5f617f54d4643bcb5bae7103038b0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fhrn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7565c6cc-lrgtt_calico-system(f2c6d001-1096-4786-820b-c2f7a945bcac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:20:51.519600 containerd[1616]: time="2025-12-12T18:20:51.519545615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:20:51.593541 sshd[5017]: Connection closed by 147.75.109.163 port 56290 Dec 12 18:20:51.596001 sshd-session[5008]: pam_unix(sshd:session): session closed for user core Dec 12 18:20:51.598000 audit[5008]: USER_END pid=5008 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:51.611019 kernel: audit: type=1106 audit(1765563651.598:784): pid=5008 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:51.598000 audit[5008]: CRED_DISP pid=5008 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:51.620611 kernel: audit: type=1104 audit(1765563651.598:785): pid=5008 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:51.621087 systemd[1]: sshd@12-64.23.253.31:22-147.75.109.163:56290.service: 
Deactivated successfully. Dec 12 18:20:51.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-64.23.253.31:22-147.75.109.163:56290 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:51.625310 systemd[1]: session-13.scope: Deactivated successfully. Dec 12 18:20:51.629119 systemd-logind[1590]: Session 13 logged out. Waiting for processes to exit. Dec 12 18:20:51.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-64.23.253.31:22-147.75.109.163:56300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:51.639244 systemd[1]: Started sshd@13-64.23.253.31:22-147.75.109.163:56300.service - OpenSSH per-connection server daemon (147.75.109.163:56300). Dec 12 18:20:51.642731 systemd-logind[1590]: Removed session 13. Dec 12 18:20:51.724000 audit[5030]: USER_ACCT pid=5030 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:51.725890 sshd[5030]: Accepted publickey for core from 147.75.109.163 port 56300 ssh2: RSA SHA256:AH9sdAXiVqzWQffvo8h2BLk1KGxpD1YCntXC072RVDo Dec 12 18:20:51.725000 audit[5030]: CRED_ACQ pid=5030 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:51.726000 audit[5030]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffceeac1a00 a2=3 a3=0 items=0 ppid=1 pid=5030 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:51.726000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:20:51.728409 sshd-session[5030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:20:51.741371 systemd-logind[1590]: New session 14 of user core. Dec 12 18:20:51.747882 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 12 18:20:51.752000 audit[5030]: USER_START pid=5030 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:51.755000 audit[5033]: CRED_ACQ pid=5033 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:51.975781 containerd[1616]: time="2025-12-12T18:20:51.975713324Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:20:51.977781 containerd[1616]: time="2025-12-12T18:20:51.977691749Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:20:51.977977 containerd[1616]: time="2025-12-12T18:20:51.977804053Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 18:20:51.978066 kubelet[2819]: E1212 18:20:51.978004 2819 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:20:51.978151 kubelet[2819]: E1212 18:20:51.978065 2819 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:20:51.978641 kubelet[2819]: E1212 18:20:51.978199 2819 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fhrn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7565c6cc-lrgtt_calico-system(f2c6d001-1096-4786-820b-c2f7a945bcac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:20:51.979601 kubelet[2819]: E1212 18:20:51.979528 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7565c6cc-lrgtt" podUID="f2c6d001-1096-4786-820b-c2f7a945bcac" Dec 12 18:20:51.998756 sshd[5033]: Connection closed by 147.75.109.163 port 56300 Dec 12 18:20:51.999741 sshd-session[5030]: pam_unix(sshd:session): session closed for user core Dec 12 18:20:52.002000 audit[5030]: USER_END pid=5030 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:52.003000 audit[5030]: CRED_DISP pid=5030 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:52.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-64.23.253.31:22-147.75.109.163:56300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:52.018408 systemd[1]: sshd@13-64.23.253.31:22-147.75.109.163:56300.service: Deactivated successfully. Dec 12 18:20:52.024328 systemd[1]: session-14.scope: Deactivated successfully. Dec 12 18:20:52.027838 systemd-logind[1590]: Session 14 logged out. Waiting for processes to exit. Dec 12 18:20:52.033046 systemd[1]: Started sshd@14-64.23.253.31:22-147.75.109.163:56304.service - OpenSSH per-connection server daemon (147.75.109.163:56304). Dec 12 18:20:52.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-64.23.253.31:22-147.75.109.163:56304 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:52.038396 systemd-logind[1590]: Removed session 14. Dec 12 18:20:52.145000 audit[5043]: USER_ACCT pid=5043 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:52.147350 sshd[5043]: Accepted publickey for core from 147.75.109.163 port 56304 ssh2: RSA SHA256:AH9sdAXiVqzWQffvo8h2BLk1KGxpD1YCntXC072RVDo Dec 12 18:20:52.148000 audit[5043]: CRED_ACQ pid=5043 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:52.148000 audit[5043]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff9890ef60 a2=3 a3=0 items=0 ppid=1 pid=5043 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:52.148000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:20:52.150510 sshd-session[5043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:20:52.159588 systemd-logind[1590]: New session 15 of user core. Dec 12 18:20:52.166911 systemd[1]: Started session-15.scope - Session 15 of User core. 
Dec 12 18:20:52.171000 audit[5043]: USER_START pid=5043 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:52.174000 audit[5046]: CRED_ACQ pid=5046 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:52.323894 sshd[5046]: Connection closed by 147.75.109.163 port 56304 Dec 12 18:20:52.325836 sshd-session[5043]: pam_unix(sshd:session): session closed for user core Dec 12 18:20:52.326000 audit[5043]: USER_END pid=5043 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:52.326000 audit[5043]: CRED_DISP pid=5043 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:52.330898 systemd[1]: sshd@14-64.23.253.31:22-147.75.109.163:56304.service: Deactivated successfully. Dec 12 18:20:52.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-64.23.253.31:22-147.75.109.163:56304 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:52.334567 systemd[1]: session-15.scope: Deactivated successfully. Dec 12 18:20:52.338634 systemd-logind[1590]: Session 15 logged out. Waiting for processes to exit. Dec 12 18:20:52.340557 systemd-logind[1590]: Removed session 15. 
Dec 12 18:20:54.130350 containerd[1616]: time="2025-12-12T18:20:54.128679832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:20:54.476904 containerd[1616]: time="2025-12-12T18:20:54.476593707Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:20:54.480509 containerd[1616]: time="2025-12-12T18:20:54.480349590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:20:54.480509 containerd[1616]: time="2025-12-12T18:20:54.480452032Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:20:54.481163 kubelet[2819]: E1212 18:20:54.481102 2819 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:20:54.482291 kubelet[2819]: E1212 18:20:54.481862 2819 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:20:54.482748 kubelet[2819]: E1212 18:20:54.482565 2819 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ftxcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f9c8d5fbb-p96pq_calico-apiserver(ed1bf1c8-646f-4c33-9642-90a577c1d786): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:20:54.483991 kubelet[2819]: E1212 18:20:54.483944 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f9c8d5fbb-p96pq" podUID="ed1bf1c8-646f-4c33-9642-90a577c1d786" Dec 12 18:20:56.132960 containerd[1616]: time="2025-12-12T18:20:56.131456946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:20:56.493553 containerd[1616]: time="2025-12-12T18:20:56.493360866Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:20:56.495711 containerd[1616]: time="2025-12-12T18:20:56.495437116Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:20:56.495711 containerd[1616]: time="2025-12-12T18:20:56.495516796Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:20:56.498104 kubelet[2819]: E1212 18:20:56.496467 2819 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:20:56.498104 kubelet[2819]: E1212 18:20:56.496581 2819 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:20:56.498104 kubelet[2819]: E1212 18:20:56.496782 2819 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f8d5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9bb959468-57r44_calico-apiserver(ab0029a5-8491-42f1-b060-fef0c0422b49): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:20:56.498885 kubelet[2819]: E1212 18:20:56.498524 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bb959468-57r44" podUID="ab0029a5-8491-42f1-b060-fef0c0422b49" Dec 12 18:20:57.130602 containerd[1616]: time="2025-12-12T18:20:57.129787669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:20:57.346192 systemd[1]: Started sshd@15-64.23.253.31:22-147.75.109.163:36242.service - OpenSSH per-connection server daemon (147.75.109.163:36242). Dec 12 18:20:57.356071 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 12 18:20:57.356299 kernel: audit: type=1130 audit(1765563657.346:805): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-64.23.253.31:22-147.75.109.163:36242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:20:57.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-64.23.253.31:22-147.75.109.163:36242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:57.456000 audit[5064]: USER_ACCT pid=5064 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:57.459091 sshd[5064]: Accepted publickey for core from 147.75.109.163 port 36242 ssh2: RSA SHA256:AH9sdAXiVqzWQffvo8h2BLk1KGxpD1YCntXC072RVDo Dec 12 18:20:57.459000 audit[5064]: CRED_ACQ pid=5064 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:57.467036 sshd-session[5064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:20:57.469203 kernel: audit: type=1101 audit(1765563657.456:806): pid=5064 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:57.475121 kernel: audit: type=1103 audit(1765563657.459:807): pid=5064 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:57.475681 kernel: audit: type=1006 audit(1765563657.459:808): pid=5064 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 12 18:20:57.459000 audit[5064]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcdbfff7a0 a2=3 a3=0 items=0 ppid=1 pid=5064 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:57.489423 kernel: audit: type=1300 audit(1765563657.459:808): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcdbfff7a0 a2=3 a3=0 items=0 ppid=1 pid=5064 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:20:57.490241 containerd[1616]: time="2025-12-12T18:20:57.488522989Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:20:57.489640 systemd-logind[1590]: New session 16 of user core. 
Dec 12 18:20:57.459000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:20:57.495050 kubelet[2819]: E1212 18:20:57.494431 2819 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:20:57.495050 kubelet[2819]: E1212 18:20:57.494522 2819 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:20:57.495178 containerd[1616]: time="2025-12-12T18:20:57.491644757Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:20:57.495178 containerd[1616]: time="2025-12-12T18:20:57.491817793Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:20:57.495508 kernel: audit: type=1327 audit(1765563657.459:808): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:20:57.496366 kubelet[2819]: E1212 18:20:57.496270 2819 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s99m7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9bb959468-v58pb_calico-apiserver(f4c646c7-47f1-433d-b7c4-005cccecda6a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:20:57.497215 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 12 18:20:57.499838 kubelet[2819]: E1212 18:20:57.499767 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bb959468-v58pb" podUID="f4c646c7-47f1-433d-b7c4-005cccecda6a" Dec 12 18:20:57.506000 audit[5064]: USER_START pid=5064 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:57.518471 kernel: audit: type=1105 audit(1765563657.506:809): pid=5064 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:57.521000 audit[5067]: CRED_ACQ pid=5067 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:57.531515 kernel: audit: type=1103 audit(1765563657.521:810): pid=5067 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:57.758328 sshd[5067]: Connection closed by 147.75.109.163 port 36242 Dec 12 18:20:57.759247 sshd-session[5064]: pam_unix(sshd:session): session closed for user core Dec 12 18:20:57.763000 audit[5064]: USER_END pid=5064 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:57.773474 systemd[1]: sshd@15-64.23.253.31:22-147.75.109.163:36242.service: Deactivated successfully. Dec 12 18:20:57.773747 kernel: audit: type=1106 audit(1765563657.763:811): pid=5064 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:57.776948 systemd[1]: session-16.scope: Deactivated successfully. Dec 12 18:20:57.764000 audit[5064]: CRED_DISP pid=5064 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:57.781825 systemd-logind[1590]: Session 16 logged out. Waiting for processes to exit. Dec 12 18:20:57.785205 kernel: audit: type=1104 audit(1765563657.764:812): pid=5064 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:20:57.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-64.23.253.31:22-147.75.109.163:36242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:20:57.787258 systemd-logind[1590]: Removed session 16. Dec 12 18:20:58.129541 containerd[1616]: time="2025-12-12T18:20:58.129376268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:20:58.467734 containerd[1616]: time="2025-12-12T18:20:58.467384695Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:20:58.470179 containerd[1616]: time="2025-12-12T18:20:58.470053043Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:20:58.470179 containerd[1616]: time="2025-12-12T18:20:58.470124440Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 18:20:58.470593 kubelet[2819]: E1212 18:20:58.470411 2819 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:20:58.470593 kubelet[2819]: E1212 18:20:58.470467 2819 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:20:58.471791 kubelet[2819]: E1212 18:20:58.471408 2819 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-448p8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-h7lvc_calico-system(c57efa3a-e82c-436b-9c07-8cf6921dcd5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:20:58.472872 kubelet[2819]: E1212 18:20:58.472813 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h7lvc" podUID="c57efa3a-e82c-436b-9c07-8cf6921dcd5d" Dec 12 18:21:00.130690 containerd[1616]: time="2025-12-12T18:21:00.130437660Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:21:00.437211 containerd[1616]: time="2025-12-12T18:21:00.437050110Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:21:00.439310 containerd[1616]: time="2025-12-12T18:21:00.439117234Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:21:00.439310 containerd[1616]: time="2025-12-12T18:21:00.439258889Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 18:21:00.439799 kubelet[2819]: E1212 18:21:00.439661 2819 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:21:00.440885 kubelet[2819]: E1212 18:21:00.439753 2819 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:21:00.440885 kubelet[2819]: E1212 18:21:00.440243 2819 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mkj66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7fcbf96c45-vldxn_calico-system(88464bd3-9403-4901-97b2-3cffb941f328): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:21:00.442407 kubelet[2819]: E1212 18:21:00.441440 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fcbf96c45-vldxn" podUID="88464bd3-9403-4901-97b2-3cffb941f328" Dec 12 18:21:02.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-64.23.253.31:22-147.75.109.163:49190 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:21:02.778349 systemd[1]: Started sshd@16-64.23.253.31:22-147.75.109.163:49190.service - OpenSSH per-connection server daemon (147.75.109.163:49190). Dec 12 18:21:02.780095 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 18:21:02.780236 kernel: audit: type=1130 audit(1765563662.778:814): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-64.23.253.31:22-147.75.109.163:49190 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:21:02.873000 audit[5081]: USER_ACCT pid=5081 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:02.874541 sshd[5081]: Accepted publickey for core from 147.75.109.163 port 49190 ssh2: RSA SHA256:AH9sdAXiVqzWQffvo8h2BLk1KGxpD1YCntXC072RVDo Dec 12 18:21:02.878344 sshd-session[5081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:21:02.883716 kernel: audit: type=1101 audit(1765563662.873:815): pid=5081 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:02.877000 audit[5081]: CRED_ACQ pid=5081 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:02.892806 kernel: audit: type=1103 audit(1765563662.877:816): pid=5081 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:02.892988 kernel: audit: type=1006 audit(1765563662.877:817): pid=5081 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 12 18:21:02.877000 audit[5081]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffebbdd62c0 a2=3 a3=0 items=0 ppid=1 pid=5081 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:02.898426 kernel: audit: type=1300 audit(1765563662.877:817): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffebbdd62c0 a2=3 a3=0 items=0 ppid=1 pid=5081 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:02.877000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:21:02.907527 kernel: audit: type=1327 audit(1765563662.877:817): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:21:02.912854 systemd-logind[1590]: New session 17 of user core. Dec 12 18:21:02.919885 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 12 18:21:02.925000 audit[5081]: USER_START pid=5081 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:02.934598 kernel: audit: type=1105 audit(1765563662.925:818): pid=5081 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:02.934748 kernel: audit: type=1103 audit(1765563662.934:819): pid=5084 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:02.934000 audit[5084]: CRED_ACQ pid=5084 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:03.119779 sshd[5084]: Connection closed by 147.75.109.163 port 49190 Dec 12 18:21:03.121819 sshd-session[5081]: pam_unix(sshd:session): session closed for user core Dec 12 18:21:03.123000 audit[5081]: USER_END pid=5081 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:03.133552 kernel: audit: type=1106 audit(1765563663.123:820): pid=5081 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:03.135153 systemd[1]: sshd@16-64.23.253.31:22-147.75.109.163:49190.service: Deactivated successfully. Dec 12 18:21:03.139137 kubelet[2819]: E1212 18:21:03.138876 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7565c6cc-lrgtt" podUID="f2c6d001-1096-4786-820b-c2f7a945bcac" Dec 12 18:21:03.140043 containerd[1616]: time="2025-12-12T18:21:03.139331764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:21:03.145904 systemd[1]: session-17.scope: Deactivated successfully. 
Dec 12 18:21:03.124000 audit[5081]: CRED_DISP pid=5081 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:03.153821 kernel: audit: type=1104 audit(1765563663.124:821): pid=5081 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:03.153589 systemd-logind[1590]: Session 17 logged out. Waiting for processes to exit. Dec 12 18:21:03.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-64.23.253.31:22-147.75.109.163:49190 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:21:03.156311 systemd-logind[1590]: Removed session 17. Dec 12 18:21:03.493003 containerd[1616]: time="2025-12-12T18:21:03.492956625Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:21:03.495643 containerd[1616]: time="2025-12-12T18:21:03.495464058Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:21:03.495643 containerd[1616]: time="2025-12-12T18:21:03.495593366Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 18:21:03.495892 kubelet[2819]: E1212 18:21:03.495829 2819 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:21:03.495960 kubelet[2819]: E1212 18:21:03.495894 2819 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:21:03.496244 kubelet[2819]: E1212 18:21:03.496054 2819 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zmd22,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5kbmx_calico-system(3530bcd5-7985-42ba-8587-569180a87a41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:21:03.499026 containerd[1616]: time="2025-12-12T18:21:03.498938930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:21:03.861363 containerd[1616]: time="2025-12-12T18:21:03.861036478Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:21:03.863006 containerd[1616]: time="2025-12-12T18:21:03.862867942Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:21:03.863006 containerd[1616]: time="2025-12-12T18:21:03.862947034Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 18:21:03.863285 kubelet[2819]: E1212 18:21:03.863186 2819 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:21:03.863285 kubelet[2819]: E1212 18:21:03.863262 2819 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:21:03.863552 kubelet[2819]: E1212 18:21:03.863457 2819 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zmd22,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5kbmx_calico-system(3530bcd5-7985-42ba-8587-569180a87a41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:21:03.865416 kubelet[2819]: E1212 18:21:03.865329 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5kbmx" podUID="3530bcd5-7985-42ba-8587-569180a87a41" Dec 12 18:21:07.127422 kubelet[2819]: E1212 18:21:07.127356 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:21:07.129578 kubelet[2819]: E1212 18:21:07.129515 
2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bb959468-57r44" podUID="ab0029a5-8491-42f1-b060-fef0c0422b49" Dec 12 18:21:08.134173 kubelet[2819]: E1212 18:21:08.134100 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f9c8d5fbb-p96pq" podUID="ed1bf1c8-646f-4c33-9642-90a577c1d786" Dec 12 18:21:08.147790 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 18:21:08.147948 kernel: audit: type=1130 audit(1765563668.144:823): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-64.23.253.31:22-147.75.109.163:49202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:21:08.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-64.23.253.31:22-147.75.109.163:49202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:21:08.144833 systemd[1]: Started sshd@17-64.23.253.31:22-147.75.109.163:49202.service - OpenSSH per-connection server daemon (147.75.109.163:49202). 
Dec 12 18:21:08.238000 audit[5095]: USER_ACCT pid=5095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:08.238973 sshd[5095]: Accepted publickey for core from 147.75.109.163 port 49202 ssh2: RSA SHA256:AH9sdAXiVqzWQffvo8h2BLk1KGxpD1YCntXC072RVDo Dec 12 18:21:08.243838 sshd-session[5095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:21:08.242000 audit[5095]: CRED_ACQ pid=5095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:08.248360 kernel: audit: type=1101 audit(1765563668.238:824): pid=5095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:08.248468 kernel: audit: type=1103 audit(1765563668.242:825): pid=5095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:08.242000 audit[5095]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc982f9c70 a2=3 a3=0 items=0 ppid=1 pid=5095 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:08.259737 systemd-logind[1590]: New session 18 of user core. Dec 12 18:21:08.263551 kernel: audit: type=1006 audit(1765563668.242:826): pid=5095 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Dec 12 18:21:08.263606 kernel: audit: type=1300 audit(1765563668.242:826): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc982f9c70 a2=3 a3=0 items=0 ppid=1 pid=5095 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:08.242000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:21:08.270512 kernel: audit: type=1327 audit(1765563668.242:826): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:21:08.272157 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 12 18:21:08.278000 audit[5095]: USER_START pid=5095 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:08.290603 kernel: audit: type=1105 audit(1765563668.278:827): pid=5095 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:08.290767 kernel: audit: type=1103 audit(1765563668.281:828): pid=5098 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:08.281000 audit[5098]: CRED_ACQ pid=5098 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:08.427329 sshd[5098]: Connection closed by 147.75.109.163 port 49202 Dec 12 18:21:08.429012 sshd-session[5095]: pam_unix(sshd:session): session closed for user core Dec 12 18:21:08.431000 audit[5095]: USER_END pid=5095 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:08.438678 systemd[1]: sshd@17-64.23.253.31:22-147.75.109.163:49202.service: Deactivated successfully. Dec 12 18:21:08.432000 audit[5095]: CRED_DISP pid=5095 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:08.443709 systemd[1]: session-18.scope: Deactivated successfully. Dec 12 18:21:08.445408 kernel: audit: type=1106 audit(1765563668.431:829): pid=5095 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:08.445657 kernel: audit: type=1104 audit(1765563668.432:830): pid=5095 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:08.447958 systemd-logind[1590]: Session 18 logged out. Waiting for processes to exit. Dec 12 18:21:08.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-64.23.253.31:22-147.75.109.163:49202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:21:08.453265 systemd-logind[1590]: Removed session 18. 
Dec 12 18:21:10.130590 kubelet[2819]: E1212 18:21:10.129453 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h7lvc" podUID="c57efa3a-e82c-436b-9c07-8cf6921dcd5d" Dec 12 18:21:12.131183 kubelet[2819]: E1212 18:21:12.130368 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bb959468-v58pb" podUID="f4c646c7-47f1-433d-b7c4-005cccecda6a" Dec 12 18:21:13.454828 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 18:21:13.455083 kernel: audit: type=1130 audit(1765563673.452:832): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-64.23.253.31:22-147.75.109.163:34542 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:21:13.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-64.23.253.31:22-147.75.109.163:34542 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:21:13.452940 systemd[1]: Started sshd@18-64.23.253.31:22-147.75.109.163:34542.service - OpenSSH per-connection server daemon (147.75.109.163:34542). 
Dec 12 18:21:13.552000 audit[5135]: USER_ACCT pid=5135 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:13.553387 sshd[5135]: Accepted publickey for core from 147.75.109.163 port 34542 ssh2: RSA SHA256:AH9sdAXiVqzWQffvo8h2BLk1KGxpD1YCntXC072RVDo Dec 12 18:21:13.560543 kernel: audit: type=1101 audit(1765563673.552:833): pid=5135 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:13.561000 audit[5135]: CRED_ACQ pid=5135 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:13.562413 sshd-session[5135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:21:13.570127 kernel: audit: type=1103 audit(1765563673.561:834): pid=5135 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:13.570254 kernel: audit: type=1006 audit(1765563673.561:835): pid=5135 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Dec 12 18:21:13.561000 audit[5135]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff507db510 a2=3 a3=0 items=0 ppid=1 pid=5135 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:13.575654 kernel: audit: type=1300 audit(1765563673.561:835): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff507db510 a2=3 a3=0 items=0 ppid=1 pid=5135 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:13.576580 systemd-logind[1590]: New session 19 of user core. Dec 12 18:21:13.561000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:21:13.583534 kernel: audit: type=1327 audit(1765563673.561:835): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:21:13.585184 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 12 18:21:13.591000 audit[5135]: USER_START pid=5135 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:13.599509 kernel: audit: type=1105 audit(1765563673.591:836): pid=5135 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:13.594000 audit[5138]: CRED_ACQ pid=5138 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:13.607537 kernel: audit: type=1103 audit(1765563673.594:837): pid=5138 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:13.744973 sshd[5138]: Connection closed by 147.75.109.163 port 34542 Dec 12 18:21:13.748106 sshd-session[5135]: pam_unix(sshd:session): session closed for user core Dec 12 18:21:13.750000 audit[5135]: USER_END pid=5135 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:13.758577 kernel: audit: type=1106 audit(1765563673.750:838): pid=5135 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:13.750000 audit[5135]: CRED_DISP pid=5135 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:13.763284 systemd[1]: sshd@18-64.23.253.31:22-147.75.109.163:34542.service: Deactivated successfully. Dec 12 18:21:13.765665 kernel: audit: type=1104 audit(1765563673.750:839): pid=5135 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:13.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-64.23.253.31:22-147.75.109.163:34542 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:21:13.769212 systemd[1]: session-19.scope: Deactivated successfully. Dec 12 18:21:13.771248 systemd-logind[1590]: Session 19 logged out. Waiting for processes to exit. 
Dec 12 18:21:13.777943 systemd[1]: Started sshd@19-64.23.253.31:22-147.75.109.163:34558.service - OpenSSH per-connection server daemon (147.75.109.163:34558). Dec 12 18:21:13.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-64.23.253.31:22-147.75.109.163:34558 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:21:13.781654 systemd-logind[1590]: Removed session 19. Dec 12 18:21:13.865000 audit[5149]: USER_ACCT pid=5149 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:13.865947 sshd[5149]: Accepted publickey for core from 147.75.109.163 port 34558 ssh2: RSA SHA256:AH9sdAXiVqzWQffvo8h2BLk1KGxpD1YCntXC072RVDo Dec 12 18:21:13.866000 audit[5149]: CRED_ACQ pid=5149 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:13.866000 audit[5149]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe87537110 a2=3 a3=0 items=0 ppid=1 pid=5149 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:13.866000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:21:13.868451 sshd-session[5149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:21:13.880943 systemd-logind[1590]: New session 20 of user core. Dec 12 18:21:13.889969 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 12 18:21:13.894000 audit[5149]: USER_START pid=5149 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:13.898000 audit[5152]: CRED_ACQ pid=5152 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:14.139601 kubelet[2819]: E1212 18:21:14.137719 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7565c6cc-lrgtt" podUID="f2c6d001-1096-4786-820b-c2f7a945bcac" Dec 12 18:21:14.288003 sshd[5152]: Connection closed by 147.75.109.163 port 34558 Dec 12 18:21:14.290449 sshd-session[5149]: pam_unix(sshd:session): session closed for user core Dec 12 18:21:14.296000 audit[5149]: USER_END pid=5149 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:14.297000 audit[5149]: CRED_DISP pid=5149 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:14.310971 systemd[1]: Started sshd@20-64.23.253.31:22-147.75.109.163:34560.service - OpenSSH per-connection server daemon (147.75.109.163:34560). Dec 12 18:21:14.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-64.23.253.31:22-147.75.109.163:34560 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:21:14.313798 systemd[1]: sshd@19-64.23.253.31:22-147.75.109.163:34558.service: Deactivated successfully. Dec 12 18:21:14.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-64.23.253.31:22-147.75.109.163:34558 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:21:14.317371 systemd[1]: session-20.scope: Deactivated successfully. Dec 12 18:21:14.322720 systemd-logind[1590]: Session 20 logged out. Waiting for processes to exit. Dec 12 18:21:14.325249 systemd-logind[1590]: Removed session 20. 
Dec 12 18:21:14.440000 audit[5159]: USER_ACCT pid=5159 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:14.442279 sshd[5159]: Accepted publickey for core from 147.75.109.163 port 34560 ssh2: RSA SHA256:AH9sdAXiVqzWQffvo8h2BLk1KGxpD1YCntXC072RVDo Dec 12 18:21:14.442000 audit[5159]: CRED_ACQ pid=5159 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:14.442000 audit[5159]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff39b77340 a2=3 a3=0 items=0 ppid=1 pid=5159 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:14.442000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:21:14.444518 sshd-session[5159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:21:14.456559 systemd-logind[1590]: New session 21 of user core. Dec 12 18:21:14.461884 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 12 18:21:14.467000 audit[5159]: USER_START pid=5159 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:14.471000 audit[5165]: CRED_ACQ pid=5165 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:15.131004 kubelet[2819]: E1212 18:21:15.130778 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fcbf96c45-vldxn" podUID="88464bd3-9403-4901-97b2-3cffb941f328" Dec 12 18:21:15.367976 sshd[5165]: Connection closed by 147.75.109.163 port 34560 Dec 12 18:21:15.369151 sshd-session[5159]: pam_unix(sshd:session): session closed for user core Dec 12 18:21:15.373000 audit[5159]: USER_END pid=5159 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:15.374000 audit[5159]: CRED_DISP pid=5159 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 
terminal=ssh res=success' Dec 12 18:21:15.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-64.23.253.31:22-147.75.109.163:34560 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:21:15.387034 systemd[1]: sshd@20-64.23.253.31:22-147.75.109.163:34560.service: Deactivated successfully. Dec 12 18:21:15.395648 systemd[1]: session-21.scope: Deactivated successfully. Dec 12 18:21:15.401029 systemd-logind[1590]: Session 21 logged out. Waiting for processes to exit. Dec 12 18:21:15.405661 systemd-logind[1590]: Removed session 21. Dec 12 18:21:15.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-64.23.253.31:22-147.75.109.163:34568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:21:15.408522 systemd[1]: Started sshd@21-64.23.253.31:22-147.75.109.163:34568.service - OpenSSH per-connection server daemon (147.75.109.163:34568). Dec 12 18:21:15.416000 audit[5177]: NETFILTER_CFG table=filter:144 family=2 entries=26 op=nft_register_rule pid=5177 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:21:15.416000 audit[5177]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff731c2c80 a2=0 a3=7fff731c2c6c items=0 ppid=2939 pid=5177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:15.416000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:21:15.469000 audit[5177]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=5177 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:21:15.469000 audit[5177]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff731c2c80 a2=0 a3=0 items=0 ppid=2939 pid=5177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:15.469000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:21:15.494000 audit[5185]: NETFILTER_CFG table=filter:146 family=2 entries=38 op=nft_register_rule pid=5185 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:21:15.494000 audit[5185]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff2f411390 a2=0 a3=7fff2f41137c items=0 ppid=2939 pid=5185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:15.494000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:21:15.500000 audit[5185]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=5185 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:21:15.500000 audit[5185]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff2f411390 a2=0 a3=0 items=0 ppid=2939 pid=5185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:15.500000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:21:15.504000 audit[5181]: USER_ACCT pid=5181 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:15.505261 sshd[5181]: Accepted publickey for core from 147.75.109.163 port 34568 ssh2: RSA SHA256:AH9sdAXiVqzWQffvo8h2BLk1KGxpD1YCntXC072RVDo Dec 12 18:21:15.506000 audit[5181]: CRED_ACQ pid=5181 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:15.507000 audit[5181]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff2d80f220 a2=3 a3=0 items=0 ppid=1 pid=5181 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:15.507000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:21:15.508396 sshd-session[5181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:21:15.517738 systemd-logind[1590]: New session 22 of user core. Dec 12 18:21:15.525161 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 12 18:21:15.530000 audit[5181]: USER_START pid=5181 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:15.533000 audit[5186]: CRED_ACQ pid=5186 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:16.146515 sshd[5186]: Connection closed by 147.75.109.163 port 34568 Dec 12 18:21:16.145557 sshd-session[5181]: pam_unix(sshd:session): session closed for user core Dec 12 18:21:16.150000 audit[5181]: USER_END pid=5181 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:16.150000 audit[5181]: CRED_DISP pid=5181 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:16.165350 systemd[1]: sshd@21-64.23.253.31:22-147.75.109.163:34568.service: Deactivated successfully. 
Dec 12 18:21:16.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-64.23.253.31:22-147.75.109.163:34568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:21:16.172287 systemd[1]: session-22.scope: Deactivated successfully. Dec 12 18:21:16.175474 systemd-logind[1590]: Session 22 logged out. Waiting for processes to exit. Dec 12 18:21:16.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-64.23.253.31:22-147.75.109.163:34582 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:21:16.182372 systemd[1]: Started sshd@22-64.23.253.31:22-147.75.109.163:34582.service - OpenSSH per-connection server daemon (147.75.109.163:34582). Dec 12 18:21:16.185882 systemd-logind[1590]: Removed session 22. Dec 12 18:21:16.292000 audit[5197]: USER_ACCT pid=5197 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:16.293115 sshd[5197]: Accepted publickey for core from 147.75.109.163 port 34582 ssh2: RSA SHA256:AH9sdAXiVqzWQffvo8h2BLk1KGxpD1YCntXC072RVDo Dec 12 18:21:16.294000 audit[5197]: CRED_ACQ pid=5197 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:16.294000 audit[5197]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd8957f4a0 a2=3 a3=0 items=0 ppid=1 pid=5197 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:16.294000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:21:16.295363 sshd-session[5197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:21:16.305398 systemd-logind[1590]: New session 23 of user core. Dec 12 18:21:16.316896 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 12 18:21:16.322000 audit[5197]: USER_START pid=5197 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:16.326000 audit[5200]: CRED_ACQ pid=5200 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:16.459242 sshd[5200]: Connection closed by 147.75.109.163 port 34582 Dec 12 18:21:16.459943 sshd-session[5197]: pam_unix(sshd:session): session closed for user core Dec 12 18:21:16.464000 audit[5197]: USER_END pid=5197 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:16.464000 audit[5197]: CRED_DISP pid=5197 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:16.469007 systemd-logind[1590]: Session 23 logged out. Waiting for processes to exit. Dec 12 18:21:16.469779 systemd[1]: sshd@22-64.23.253.31:22-147.75.109.163:34582.service: Deactivated successfully. Dec 12 18:21:16.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-64.23.253.31:22-147.75.109.163:34582 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:21:16.473416 systemd[1]: session-23.scope: Deactivated successfully. Dec 12 18:21:16.477078 systemd-logind[1590]: Removed session 23. 
Dec 12 18:21:17.129560 kubelet[2819]: E1212 18:21:17.129420 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5kbmx" podUID="3530bcd5-7985-42ba-8587-569180a87a41" Dec 12 18:21:18.129569 kubelet[2819]: E1212 18:21:18.129079 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bb959468-57r44" podUID="ab0029a5-8491-42f1-b060-fef0c0422b49" Dec 12 18:21:19.128598 kubelet[2819]: E1212 18:21:19.128370 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f9c8d5fbb-p96pq" podUID="ed1bf1c8-646f-4c33-9642-90a577c1d786" Dec 12 18:21:21.481496 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 12 18:21:21.481665 kernel: audit: type=1130 audit(1765563681.479:881): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-64.23.253.31:22-147.75.109.163:34596 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:21:21.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-64.23.253.31:22-147.75.109.163:34596 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:21:21.479733 systemd[1]: Started sshd@23-64.23.253.31:22-147.75.109.163:34596.service - OpenSSH per-connection server daemon (147.75.109.163:34596). 
Dec 12 18:21:21.561000 audit[5212]: USER_ACCT pid=5212 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:21.562166 sshd[5212]: Accepted publickey for core from 147.75.109.163 port 34596 ssh2: RSA SHA256:AH9sdAXiVqzWQffvo8h2BLk1KGxpD1YCntXC072RVDo Dec 12 18:21:21.565104 sshd-session[5212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:21:21.569530 kernel: audit: type=1101 audit(1765563681.561:882): pid=5212 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:21.563000 audit[5212]: CRED_ACQ pid=5212 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:21.578167 kernel: audit: type=1103 audit(1765563681.563:883): pid=5212 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:21.578288 kernel: audit: type=1006 audit(1765563681.563:884): pid=5212 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 12 18:21:21.563000 audit[5212]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd53c94e20 a2=3 a3=0 items=0 ppid=1 pid=5212 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:21.582919 kernel: audit: type=1300 audit(1765563681.563:884): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd53c94e20 a2=3 a3=0 items=0 ppid=1 pid=5212 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:21.586067 systemd-logind[1590]: New session 24 of user core. Dec 12 18:21:21.563000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:21:21.591518 kernel: audit: type=1327 audit(1765563681.563:884): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:21:21.596779 systemd[1]: Started session-24.scope - Session 24 of User core. 
Dec 12 18:21:21.603000 audit[5212]: USER_START pid=5212 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:21.612559 kernel: audit: type=1105 audit(1765563681.603:885): pid=5212 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:21.607000 audit[5215]: CRED_ACQ pid=5215 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:21.619546 kernel: audit: type=1103 audit(1765563681.607:886): pid=5215 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:21.740586 sshd[5215]: Connection closed by 147.75.109.163 port 34596 Dec 12 18:21:21.743259 sshd-session[5212]: pam_unix(sshd:session): session closed for user core Dec 12 18:21:21.745000 audit[5212]: USER_END pid=5212 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:21.751080 systemd[1]: sshd@23-64.23.253.31:22-147.75.109.163:34596.service: Deactivated successfully. Dec 12 18:21:21.753546 kernel: audit: type=1106 audit(1765563681.745:887): pid=5212 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:21.746000 audit[5212]: CRED_DISP pid=5212 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:21.756805 systemd[1]: session-24.scope: Deactivated successfully. Dec 12 18:21:21.761039 systemd-logind[1590]: Session 24 logged out. Waiting for processes to exit. Dec 12 18:21:21.761886 kernel: audit: type=1104 audit(1765563681.746:888): pid=5212 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:21.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-64.23.253.31:22-147.75.109.163:34596 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:21:21.765192 systemd-logind[1590]: Removed session 24. 
Dec 12 18:21:22.188000 audit[5229]: NETFILTER_CFG table=filter:148 family=2 entries=26 op=nft_register_rule pid=5229 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:21:22.188000 audit[5229]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdb1ddc460 a2=0 a3=7ffdb1ddc44c items=0 ppid=2939 pid=5229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:22.188000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:21:22.199000 audit[5229]: NETFILTER_CFG table=nat:149 family=2 entries=104 op=nft_register_chain pid=5229 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:21:22.199000 audit[5229]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffdb1ddc460 a2=0 a3=7ffdb1ddc44c items=0 ppid=2939 pid=5229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:22.199000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:21:23.127983 kubelet[2819]: E1212 18:21:23.127927 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bb959468-v58pb" podUID="f4c646c7-47f1-433d-b7c4-005cccecda6a" Dec 12 18:21:24.128561 kubelet[2819]: E1212 18:21:24.127807 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h7lvc" podUID="c57efa3a-e82c-436b-9c07-8cf6921dcd5d" Dec 12 18:21:26.130260 kubelet[2819]: E1212 18:21:26.130052 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fcbf96c45-vldxn" podUID="88464bd3-9403-4901-97b2-3cffb941f328" Dec 12 18:21:26.133558 kubelet[2819]: E1212 18:21:26.132384 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to 
resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7565c6cc-lrgtt" podUID="f2c6d001-1096-4786-820b-c2f7a945bcac" Dec 12 18:21:26.768073 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 12 18:21:26.768301 kernel: audit: type=1130 audit(1765563686.763:892): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-64.23.253.31:22-147.75.109.163:50718 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:21:26.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-64.23.253.31:22-147.75.109.163:50718 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:21:26.763320 systemd[1]: Started sshd@24-64.23.253.31:22-147.75.109.163:50718.service - OpenSSH per-connection server daemon (147.75.109.163:50718). Dec 12 18:21:26.860000 audit[5231]: USER_ACCT pid=5231 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:26.860880 sshd[5231]: Accepted publickey for core from 147.75.109.163 port 50718 ssh2: RSA SHA256:AH9sdAXiVqzWQffvo8h2BLk1KGxpD1YCntXC072RVDo Dec 12 18:21:26.869581 kernel: audit: type=1101 audit(1765563686.860:893): pid=5231 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:26.869000 audit[5231]: CRED_ACQ pid=5231 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:26.871463 sshd-session[5231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:21:26.877512 kernel: audit: type=1103 audit(1765563686.869:894): pid=5231 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:26.870000 audit[5231]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc4f4466e0 a2=3 a3=0 items=0 ppid=1 pid=5231 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.886181 kernel: audit: type=1006 audit(1765563686.870:895): pid=5231 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Dec 12 18:21:26.886301 kernel: audit: type=1300 audit(1765563686.870:895): arch=c000003e syscall=1 success=yes exit=3 
a0=8 a1=7ffc4f4466e0 a2=3 a3=0 items=0 ppid=1 pid=5231 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:26.886414 systemd-logind[1590]: New session 25 of user core. Dec 12 18:21:26.870000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:21:26.892563 kernel: audit: type=1327 audit(1765563686.870:895): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:21:26.894204 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 12 18:21:26.899000 audit[5231]: USER_START pid=5231 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:26.914878 kernel: audit: type=1105 audit(1765563686.899:896): pid=5231 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:26.915043 kernel: audit: type=1103 audit(1765563686.904:897): pid=5234 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:26.904000 audit[5234]: CRED_ACQ pid=5234 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:27.057242 sshd[5234]: Connection closed by 147.75.109.163 port 50718 Dec 12 18:21:27.057771 sshd-session[5231]: pam_unix(sshd:session): session closed for user core Dec 12 18:21:27.060000 audit[5231]: USER_END pid=5231 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:27.064788 systemd[1]: sshd@24-64.23.253.31:22-147.75.109.163:50718.service: Deactivated successfully. Dec 12 18:21:27.069924 kernel: audit: type=1106 audit(1765563687.060:898): pid=5231 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:27.067916 systemd[1]: session-25.scope: Deactivated successfully. 
Dec 12 18:21:27.060000 audit[5231]: CRED_DISP pid=5231 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:27.079540 kernel: audit: type=1104 audit(1765563687.060:899): pid=5231 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:27.079468 systemd-logind[1590]: Session 25 logged out. Waiting for processes to exit. Dec 12 18:21:27.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-64.23.253.31:22-147.75.109.163:50718 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:21:27.083325 systemd-logind[1590]: Removed session 25. Dec 12 18:21:29.128027 kubelet[2819]: E1212 18:21:29.127962 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bb959468-57r44" podUID="ab0029a5-8491-42f1-b060-fef0c0422b49" Dec 12 18:21:29.129521 kubelet[2819]: E1212 18:21:29.129443 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5kbmx" podUID="3530bcd5-7985-42ba-8587-569180a87a41" Dec 12 18:21:30.129909 kubelet[2819]: E1212 18:21:30.126735 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:21:30.129909 kubelet[2819]: E1212 18:21:30.129372 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f9c8d5fbb-p96pq" podUID="ed1bf1c8-646f-4c33-9642-90a577c1d786" Dec 12 18:21:32.081711 systemd[1]: Started sshd@25-64.23.253.31:22-147.75.109.163:50722.service - OpenSSH per-connection server daemon 
(147.75.109.163:50722). Dec 12 18:21:32.096037 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 18:21:32.096201 kernel: audit: type=1130 audit(1765563692.080:901): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-64.23.253.31:22-147.75.109.163:50722 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:21:32.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-64.23.253.31:22-147.75.109.163:50722 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:21:32.251051 sshd[5254]: Accepted publickey for core from 147.75.109.163 port 50722 ssh2: RSA SHA256:AH9sdAXiVqzWQffvo8h2BLk1KGxpD1YCntXC072RVDo Dec 12 18:21:32.261133 kernel: audit: type=1101 audit(1765563692.249:902): pid=5254 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:32.249000 audit[5254]: USER_ACCT pid=5254 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:32.259000 audit[5254]: CRED_ACQ pid=5254 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:32.271567 kernel: audit: type=1103 audit(1765563692.259:903): pid=5254 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:32.272236 sshd-session[5254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:21:32.259000 audit[5254]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd3c937880 a2=3 a3=0 items=0 ppid=1 pid=5254 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:32.285072 kernel: audit: type=1006 audit(1765563692.259:904): pid=5254 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Dec 12 18:21:32.285242 kernel: audit: type=1300 audit(1765563692.259:904): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd3c937880 a2=3 a3=0 items=0 ppid=1 pid=5254 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:32.259000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:21:32.292502 kernel: audit: type=1327 audit(1765563692.259:904): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:21:32.310908 systemd-logind[1590]: New session 26 of user core. Dec 12 18:21:32.316067 systemd[1]: Started session-26.scope - Session 26 of User core. 
Dec 12 18:21:32.322000 audit[5254]: USER_START pid=5254 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:32.334161 kernel: audit: type=1105 audit(1765563692.322:905): pid=5254 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:32.337000 audit[5257]: CRED_ACQ pid=5257 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:32.348523 kernel: audit: type=1103 audit(1765563692.337:906): pid=5257 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:32.720927 sshd[5257]: Connection closed by 147.75.109.163 port 50722 Dec 12 18:21:32.721689 sshd-session[5254]: pam_unix(sshd:session): session closed for user core Dec 12 18:21:32.723000 audit[5254]: USER_END pid=5254 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:32.732024 systemd[1]: sshd@25-64.23.253.31:22-147.75.109.163:50722.service: Deactivated successfully. Dec 12 18:21:32.723000 audit[5254]: CRED_DISP pid=5254 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:32.736220 kernel: audit: type=1106 audit(1765563692.723:907): pid=5254 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:32.736329 kernel: audit: type=1104 audit(1765563692.723:908): pid=5254 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:32.739436 systemd[1]: session-26.scope: Deactivated successfully. Dec 12 18:21:32.742220 systemd-logind[1590]: Session 26 logged out. Waiting for processes to exit. Dec 12 18:21:32.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-64.23.253.31:22-147.75.109.163:50722 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:21:32.747147 systemd-logind[1590]: Removed session 26. 
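The audit records above trace one complete SSH session lifecycle for the core user (session 26): USER_ACCT and CRED_ACQ when the public key is accepted, USER_START as session-26.scope starts, then USER_END, CRED_DISP and the SERVICE_STOP of the per-connection sshd unit when the session closes. As a minimal sketch of how a run-together capture like this can be correlated offline, the following Python fragment (a hypothetical helper, not part of the system being logged) splits the text into journal records and groups the sshd/PAM audit events by their ses= id:

    # Sketch only: correlate sshd/PAM audit events in a run-together journal capture.
    # Assumes the "Dec 12 HH:MM:SS.micro " record prefix seen above; stdlib only, Python 3.7+.
    import re
    import sys
    from collections import defaultdict

    RECORD_SPLIT = re.compile(r"(?=Dec 12\s+\d{2}:\d{2}:\d{2}\.\d+\s)")
    AUDIT_EVENT = re.compile(r"audit\[\d+\]: (?P<type>[A-Z_]+)\b.*?\bses=(?P<ses>\d+)")

    def sessions(text):
        """Group audit event types (USER_ACCT, CRED_ACQ, USER_START, ...) by session id."""
        by_ses = defaultdict(list)
        for record in RECORD_SPLIT.split(text):
            m = AUDIT_EVENT.search(record)
            if m:  # kernel "audit: type=NNNN" duplicates carry no audit[pid] prefix and are skipped
                by_ses[m.group("ses")].append(m.group("type"))
        return dict(by_ses)

    if __name__ == "__main__":
        for ses, events in sorted(sessions(sys.stdin.read()).items()):
            print(f"ses={ses}: {' -> '.join(events)}")

Fed a capture like this one on stdin, it prints the ordered audit event types per session id, which makes it easy to spot sessions that never reached USER_END.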
Dec 12 18:21:34.127517 kubelet[2819]: E1212 18:21:34.126368 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:21:37.126550 kubelet[2819]: E1212 18:21:37.126421 2819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 12 18:21:37.128193 kubelet[2819]: E1212 18:21:37.128149 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h7lvc" podUID="c57efa3a-e82c-436b-9c07-8cf6921dcd5d" Dec 12 18:21:37.751185 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 18:21:37.752342 kernel: audit: type=1130 audit(1765563697.738:910): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-64.23.253.31:22-147.75.109.163:48440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:21:37.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-64.23.253.31:22-147.75.109.163:48440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:21:37.739827 systemd[1]: Started sshd@26-64.23.253.31:22-147.75.109.163:48440.service - OpenSSH per-connection server daemon (147.75.109.163:48440). 
Dec 12 18:21:37.878000 audit[5271]: USER_ACCT pid=5271 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:37.887611 kernel: audit: type=1101 audit(1765563697.878:911): pid=5271 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:37.888517 sshd[5271]: Accepted publickey for core from 147.75.109.163 port 48440 ssh2: RSA SHA256:AH9sdAXiVqzWQffvo8h2BLk1KGxpD1YCntXC072RVDo Dec 12 18:21:37.891285 sshd-session[5271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:21:37.889000 audit[5271]: CRED_ACQ pid=5271 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:37.900529 kernel: audit: type=1103 audit(1765563697.889:912): pid=5271 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:37.907621 kernel: audit: type=1006 audit(1765563697.889:913): pid=5271 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Dec 12 18:21:37.908340 kernel: audit: type=1300 audit(1765563697.889:913): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc5e741ff0 a2=3 a3=0 items=0 ppid=1 pid=5271 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:37.889000 audit[5271]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc5e741ff0 a2=3 a3=0 items=0 ppid=1 pid=5271 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:21:37.889000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:21:37.918507 kernel: audit: type=1327 audit(1765563697.889:913): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:21:37.924322 systemd-logind[1590]: New session 27 of user core. Dec 12 18:21:37.927811 systemd[1]: Started session-27.scope - Session 27 of User core. 
Dec 12 18:21:37.934000 audit[5271]: USER_START pid=5271 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:37.946528 kernel: audit: type=1105 audit(1765563697.934:914): pid=5271 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:37.945000 audit[5274]: CRED_ACQ pid=5274 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:37.954512 kernel: audit: type=1103 audit(1765563697.945:915): pid=5274 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:38.130865 containerd[1616]: time="2025-12-12T18:21:38.128282688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:21:38.239570 sshd[5274]: Connection closed by 147.75.109.163 port 48440 Dec 12 18:21:38.239693 sshd-session[5271]: pam_unix(sshd:session): session closed for user core Dec 12 18:21:38.242000 audit[5271]: USER_END pid=5271 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:38.253528 kernel: audit: type=1106 audit(1765563698.242:916): pid=5271 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:38.254643 systemd[1]: sshd@26-64.23.253.31:22-147.75.109.163:48440.service: Deactivated successfully. Dec 12 18:21:38.259289 systemd[1]: session-27.scope: Deactivated successfully. Dec 12 18:21:38.242000 audit[5271]: CRED_DISP pid=5271 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:38.268720 kernel: audit: type=1104 audit(1765563698.242:917): pid=5271 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 12 18:21:38.270107 systemd-logind[1590]: Session 27 logged out. Waiting for processes to exit. Dec 12 18:21:38.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-64.23.253.31:22-147.75.109.163:48440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 12 18:21:38.272889 systemd-logind[1590]: Removed session 27. Dec 12 18:21:38.459656 containerd[1616]: time="2025-12-12T18:21:38.459515482Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:21:38.460974 containerd[1616]: time="2025-12-12T18:21:38.460918531Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:21:38.461248 containerd[1616]: time="2025-12-12T18:21:38.461018351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 18:21:38.462006 kubelet[2819]: E1212 18:21:38.461188 2819 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:21:38.462006 kubelet[2819]: E1212 18:21:38.461397 2819 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:21:38.462006 kubelet[2819]: E1212 18:21:38.461781 2819 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4fe5f617f54d4643bcb5bae7103038b0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fhrn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7565c6cc-lrgtt_calico-system(f2c6d001-1096-4786-820b-c2f7a945bcac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:21:38.464264 containerd[1616]: time="2025-12-12T18:21:38.464220348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 
18:21:38.811630 containerd[1616]: time="2025-12-12T18:21:38.811499491Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:21:38.813691 containerd[1616]: time="2025-12-12T18:21:38.813509182Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:21:38.813691 containerd[1616]: time="2025-12-12T18:21:38.813647845Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:21:38.815649 kubelet[2819]: E1212 18:21:38.815580 2819 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:21:38.815808 kubelet[2819]: E1212 18:21:38.815658 2819 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:21:38.816005 kubelet[2819]: E1212 18:21:38.815933 2819 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s99m7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed 
in pod calico-apiserver-9bb959468-v58pb_calico-apiserver(f4c646c7-47f1-433d-b7c4-005cccecda6a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:21:38.817107 containerd[1616]: time="2025-12-12T18:21:38.816977635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:21:38.817233 kubelet[2819]: E1212 18:21:38.817055 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bb959468-v58pb" podUID="f4c646c7-47f1-433d-b7c4-005cccecda6a" Dec 12 18:21:39.150137 containerd[1616]: time="2025-12-12T18:21:39.149919042Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:21:39.153346 containerd[1616]: time="2025-12-12T18:21:39.153177051Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:21:39.153346 containerd[1616]: time="2025-12-12T18:21:39.153214791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 18:21:39.153923 kubelet[2819]: E1212 18:21:39.153848 2819 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:21:39.154296 kubelet[2819]: E1212 18:21:39.154048 2819 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:21:39.155964 kubelet[2819]: E1212 18:21:39.155774 2819 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fhrn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7565c6cc-lrgtt_calico-system(f2c6d001-1096-4786-820b-c2f7a945bcac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:21:39.157209 kubelet[2819]: E1212 18:21:39.157146 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7565c6cc-lrgtt" podUID="f2c6d001-1096-4786-820b-c2f7a945bcac" Dec 12 18:21:40.129625 kubelet[2819]: E1212 18:21:40.129092 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fcbf96c45-vldxn" podUID="88464bd3-9403-4901-97b2-3cffb941f328" Dec 12 18:21:40.135499 kubelet[2819]: E1212 
18:21:40.135359 2819 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5kbmx" podUID="3530bcd5-7985-42ba-8587-569180a87a41"
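Every pull failure in this stretch of the log has the same shape: containerd's fetch of a ghcr.io/flatcar/calico/*:v3.30.4 manifest returns 404 Not Found, which surfaces as NotFound from the CRI PullImage call and then as ErrImagePull / ImagePullBackOff in kubelet. As an illustrative cross-check (a sketch only, not something the node above ran; it assumes the repositories are public so an anonymous pull token suffices), the same question can be put to the registry directly over the standard OCI distribution API:

    # Sketch: ask ghcr.io whether a given tag resolves, mirroring containerd's manifest fetch.
    # Hypothetical helper; assumes anonymous pull tokens work because the repositories are public.
    import json
    import sys
    import urllib.error
    import urllib.request

    ACCEPT = ", ".join([
        "application/vnd.oci.image.index.v1+json",
        "application/vnd.oci.image.manifest.v1+json",
        "application/vnd.docker.distribution.manifest.list.v2+json",
        "application/vnd.docker.distribution.manifest.v2+json",
    ])

    def tag_exists(repository, tag):
        """Return True if ghcr.io/<repository>:<tag> resolves, False if the registry says 404."""
        token_url = ("https://ghcr.io/token?service=ghcr.io"
                     f"&scope=repository:{repository}:pull")
        with urllib.request.urlopen(token_url) as resp:
            token = json.load(resp)["token"]
        req = urllib.request.Request(
            f"https://ghcr.io/v2/{repository}/manifests/{tag}",
            headers={"Authorization": f"Bearer {token}", "Accept": ACCEPT},
            method="HEAD",
        )
        try:
            urllib.request.urlopen(req)
            return True
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return False
            raise

    if __name__ == "__main__":
        repo, tag = sys.argv[1], sys.argv[2]  # e.g. flatcar/calico/apiserver v3.30.4
        print(f"ghcr.io/{repo}:{tag} exists: {tag_exists(repo, tag)}")

A False result matches the 404s containerd records above; a True result would instead point at something node-local (DNS, proxying, or credentials) rather than a missing tag.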