Dec 16 03:27:09.024616 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Dec 16 00:18:19 -00 2025 Dec 16 03:27:09.024664 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357 Dec 16 03:27:09.024689 kernel: BIOS-provided physical RAM map: Dec 16 03:27:09.024702 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 16 03:27:09.024714 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 16 03:27:09.024726 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 16 03:27:09.024741 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Dec 16 03:27:09.024770 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Dec 16 03:27:09.024783 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 16 03:27:09.024796 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 16 03:27:09.024808 kernel: NX (Execute Disable) protection: active Dec 16 03:27:09.024830 kernel: APIC: Static calls initialized Dec 16 03:27:09.027800 kernel: SMBIOS 2.8 present. Dec 16 03:27:09.027828 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Dec 16 03:27:09.027843 kernel: DMI: Memory slots populated: 1/1 Dec 16 03:27:09.027853 kernel: Hypervisor detected: KVM Dec 16 03:27:09.027883 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Dec 16 03:27:09.027892 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 16 03:27:09.027900 kernel: kvm-clock: using sched offset of 4336926140 cycles Dec 16 03:27:09.027910 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 16 03:27:09.027919 kernel: tsc: Detected 2494.140 MHz processor Dec 16 03:27:09.027928 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 16 03:27:09.027938 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 16 03:27:09.027952 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Dec 16 03:27:09.027961 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Dec 16 03:27:09.027970 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 16 03:27:09.027979 kernel: ACPI: Early table checksum verification disabled Dec 16 03:27:09.027988 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Dec 16 03:27:09.027997 kernel: ACPI: RSDT 0x000000007FFE19FD 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 03:27:09.028005 kernel: ACPI: FACP 0x000000007FFE17E1 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 03:27:09.028014 kernel: ACPI: DSDT 0x000000007FFE0040 0017A1 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 03:27:09.028028 kernel: ACPI: FACS 0x000000007FFE0000 000040 Dec 16 03:27:09.028037 kernel: ACPI: APIC 0x000000007FFE1855 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 03:27:09.028046 kernel: ACPI: HPET 0x000000007FFE18D5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 03:27:09.028054 kernel: ACPI: SRAT 0x000000007FFE190D 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 
03:27:09.028063 kernel: ACPI: WAET 0x000000007FFE19D5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 03:27:09.028073 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe17e1-0x7ffe1854] Dec 16 03:27:09.028087 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe17e0] Dec 16 03:27:09.028106 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Dec 16 03:27:09.028115 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe1855-0x7ffe18d4] Dec 16 03:27:09.028131 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe18d5-0x7ffe190c] Dec 16 03:27:09.028146 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe190d-0x7ffe19d4] Dec 16 03:27:09.028160 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe19d5-0x7ffe19fc] Dec 16 03:27:09.028178 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Dec 16 03:27:09.028188 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Dec 16 03:27:09.028197 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff] Dec 16 03:27:09.028206 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff] Dec 16 03:27:09.028216 kernel: Zone ranges: Dec 16 03:27:09.028225 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 16 03:27:09.028239 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Dec 16 03:27:09.028248 kernel: Normal empty Dec 16 03:27:09.028257 kernel: Device empty Dec 16 03:27:09.028268 kernel: Movable zone start for each node Dec 16 03:27:09.028283 kernel: Early memory node ranges Dec 16 03:27:09.028296 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 16 03:27:09.028309 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Dec 16 03:27:09.028322 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Dec 16 03:27:09.028345 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 16 03:27:09.028359 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 16 03:27:09.028372 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Dec 16 03:27:09.028386 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 16 03:27:09.028399 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 16 03:27:09.028409 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 16 03:27:09.028420 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 16 03:27:09.028436 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 16 03:27:09.028445 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 16 03:27:09.028457 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 16 03:27:09.028467 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 16 03:27:09.028476 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 16 03:27:09.028485 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 16 03:27:09.028494 kernel: TSC deadline timer available Dec 16 03:27:09.028510 kernel: CPU topo: Max. logical packages: 1 Dec 16 03:27:09.028519 kernel: CPU topo: Max. logical dies: 1 Dec 16 03:27:09.028528 kernel: CPU topo: Max. dies per package: 1 Dec 16 03:27:09.028537 kernel: CPU topo: Max. threads per core: 1 Dec 16 03:27:09.028546 kernel: CPU topo: Num. cores per package: 2 Dec 16 03:27:09.028555 kernel: CPU topo: Num. 
threads per package: 2 Dec 16 03:27:09.028564 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Dec 16 03:27:09.028574 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 16 03:27:09.028588 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Dec 16 03:27:09.028597 kernel: Booting paravirtualized kernel on KVM Dec 16 03:27:09.028606 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 16 03:27:09.028616 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Dec 16 03:27:09.028625 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Dec 16 03:27:09.028634 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Dec 16 03:27:09.028643 kernel: pcpu-alloc: [0] 0 1 Dec 16 03:27:09.028657 kernel: kvm-guest: PV spinlocks disabled, no host support Dec 16 03:27:09.028668 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357 Dec 16 03:27:09.028677 kernel: random: crng init done Dec 16 03:27:09.028686 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 16 03:27:09.028695 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 16 03:27:09.028704 kernel: Fallback order for Node 0: 0 Dec 16 03:27:09.028719 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153 Dec 16 03:27:09.028728 kernel: Policy zone: DMA32 Dec 16 03:27:09.028737 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 16 03:27:09.028775 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 16 03:27:09.028785 kernel: Kernel/User page tables isolation: enabled Dec 16 03:27:09.028794 kernel: ftrace: allocating 40103 entries in 157 pages Dec 16 03:27:09.028803 kernel: ftrace: allocated 157 pages with 5 groups Dec 16 03:27:09.028812 kernel: Dynamic Preempt: voluntary Dec 16 03:27:09.028829 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 16 03:27:09.028840 kernel: rcu: RCU event tracing is enabled. Dec 16 03:27:09.028855 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 16 03:27:09.028869 kernel: Trampoline variant of Tasks RCU enabled. Dec 16 03:27:09.028883 kernel: Rude variant of Tasks RCU enabled. Dec 16 03:27:09.028897 kernel: Tracing variant of Tasks RCU enabled. Dec 16 03:27:09.028911 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 16 03:27:09.028933 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 16 03:27:09.028942 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 03:27:09.028955 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 03:27:09.028964 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 03:27:09.028973 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 16 03:27:09.028983 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Dec 16 03:27:09.028998 kernel: Console: colour VGA+ 80x25 Dec 16 03:27:09.029019 kernel: printk: legacy console [tty0] enabled Dec 16 03:27:09.029032 kernel: printk: legacy console [ttyS0] enabled Dec 16 03:27:09.029045 kernel: ACPI: Core revision 20240827 Dec 16 03:27:09.029060 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Dec 16 03:27:09.029091 kernel: APIC: Switch to symmetric I/O mode setup Dec 16 03:27:09.029106 kernel: x2apic enabled Dec 16 03:27:09.029115 kernel: APIC: Switched APIC routing to: physical x2apic Dec 16 03:27:09.029125 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 16 03:27:09.029135 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Dec 16 03:27:09.029147 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140) Dec 16 03:27:09.029162 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Dec 16 03:27:09.029172 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Dec 16 03:27:09.029201 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 16 03:27:09.029220 kernel: Spectre V2 : Mitigation: Retpolines Dec 16 03:27:09.029230 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Dec 16 03:27:09.029239 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Dec 16 03:27:09.029249 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 16 03:27:09.029259 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 16 03:27:09.029269 kernel: MDS: Mitigation: Clear CPU buffers Dec 16 03:27:09.029278 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 16 03:27:09.029293 kernel: active return thunk: its_return_thunk Dec 16 03:27:09.029303 kernel: ITS: Mitigation: Aligned branch/return thunks Dec 16 03:27:09.029313 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 16 03:27:09.029323 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 16 03:27:09.029332 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 16 03:27:09.029342 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 16 03:27:09.029352 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 16 03:27:09.029367 kernel: Freeing SMP alternatives memory: 32K Dec 16 03:27:09.029377 kernel: pid_max: default: 32768 minimum: 301 Dec 16 03:27:09.029386 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 16 03:27:09.029396 kernel: landlock: Up and running. Dec 16 03:27:09.029406 kernel: SELinux: Initializing. Dec 16 03:27:09.029415 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 16 03:27:09.029425 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 16 03:27:09.029434 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Dec 16 03:27:09.029451 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Dec 16 03:27:09.029460 kernel: signal: max sigframe size: 1776 Dec 16 03:27:09.029470 kernel: rcu: Hierarchical SRCU implementation. Dec 16 03:27:09.029480 kernel: rcu: Max phase no-delay instances is 400. 
Dec 16 03:27:09.029490 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 16 03:27:09.029500 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 16 03:27:09.029509 kernel: smp: Bringing up secondary CPUs ... Dec 16 03:27:09.029528 kernel: smpboot: x86: Booting SMP configuration: Dec 16 03:27:09.029538 kernel: .... node #0, CPUs: #1 Dec 16 03:27:09.029548 kernel: smp: Brought up 1 node, 2 CPUs Dec 16 03:27:09.029557 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS) Dec 16 03:27:09.029568 kernel: Memory: 1983292K/2096612K available (14336K kernel code, 2444K rwdata, 31636K rodata, 15556K init, 2484K bss, 108756K reserved, 0K cma-reserved) Dec 16 03:27:09.029578 kernel: devtmpfs: initialized Dec 16 03:27:09.029587 kernel: x86/mm: Memory block size: 128MB Dec 16 03:27:09.029606 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 16 03:27:09.029615 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 16 03:27:09.029625 kernel: pinctrl core: initialized pinctrl subsystem Dec 16 03:27:09.029635 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 16 03:27:09.029644 kernel: audit: initializing netlink subsys (disabled) Dec 16 03:27:09.029654 kernel: audit: type=2000 audit(1765855626.111:1): state=initialized audit_enabled=0 res=1 Dec 16 03:27:09.029663 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 16 03:27:09.029682 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 16 03:27:09.029691 kernel: cpuidle: using governor menu Dec 16 03:27:09.029701 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 16 03:27:09.029710 kernel: dca service started, version 1.12.1 Dec 16 03:27:09.029720 kernel: PCI: Using configuration type 1 for base access Dec 16 03:27:09.029730 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 16 03:27:09.029739 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 16 03:27:09.029770 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 16 03:27:09.029780 kernel: ACPI: Added _OSI(Module Device) Dec 16 03:27:09.029790 kernel: ACPI: Added _OSI(Processor Device) Dec 16 03:27:09.029799 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 16 03:27:09.029809 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 16 03:27:09.029819 kernel: ACPI: Interpreter enabled Dec 16 03:27:09.029828 kernel: ACPI: PM: (supports S0 S5) Dec 16 03:27:09.029844 kernel: ACPI: Using IOAPIC for interrupt routing Dec 16 03:27:09.029854 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 16 03:27:09.029864 kernel: PCI: Using E820 reservations for host bridge windows Dec 16 03:27:09.029874 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Dec 16 03:27:09.029883 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 16 03:27:09.030197 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 16 03:27:09.030402 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Dec 16 03:27:09.030556 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Dec 16 03:27:09.030570 kernel: acpiphp: Slot [3] registered Dec 16 03:27:09.030580 kernel: acpiphp: Slot [4] registered Dec 16 03:27:09.030589 kernel: acpiphp: Slot [5] registered Dec 16 03:27:09.030599 kernel: acpiphp: Slot [6] registered Dec 16 03:27:09.030609 kernel: acpiphp: Slot [7] registered Dec 16 03:27:09.030626 kernel: acpiphp: Slot [8] registered Dec 16 03:27:09.030636 kernel: acpiphp: Slot [9] registered Dec 16 03:27:09.030646 kernel: acpiphp: Slot [10] registered Dec 16 03:27:09.030655 kernel: acpiphp: Slot [11] registered Dec 16 03:27:09.030665 kernel: acpiphp: Slot [12] registered Dec 16 03:27:09.030674 kernel: acpiphp: Slot [13] registered Dec 16 03:27:09.030684 kernel: acpiphp: Slot [14] registered Dec 16 03:27:09.030699 kernel: acpiphp: Slot [15] registered Dec 16 03:27:09.030709 kernel: acpiphp: Slot [16] registered Dec 16 03:27:09.030719 kernel: acpiphp: Slot [17] registered Dec 16 03:27:09.030735 kernel: acpiphp: Slot [18] registered Dec 16 03:27:09.032834 kernel: acpiphp: Slot [19] registered Dec 16 03:27:09.032854 kernel: acpiphp: Slot [20] registered Dec 16 03:27:09.032864 kernel: acpiphp: Slot [21] registered Dec 16 03:27:09.032874 kernel: acpiphp: Slot [22] registered Dec 16 03:27:09.032898 kernel: acpiphp: Slot [23] registered Dec 16 03:27:09.032907 kernel: acpiphp: Slot [24] registered Dec 16 03:27:09.032917 kernel: acpiphp: Slot [25] registered Dec 16 03:27:09.032926 kernel: acpiphp: Slot [26] registered Dec 16 03:27:09.032936 kernel: acpiphp: Slot [27] registered Dec 16 03:27:09.032945 kernel: acpiphp: Slot [28] registered Dec 16 03:27:09.032955 kernel: acpiphp: Slot [29] registered Dec 16 03:27:09.032969 kernel: acpiphp: Slot [30] registered Dec 16 03:27:09.032979 kernel: acpiphp: Slot [31] registered Dec 16 03:27:09.032989 kernel: PCI host bridge to bus 0000:00 Dec 16 03:27:09.033274 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 16 03:27:09.033420 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 16 03:27:09.033551 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 16 03:27:09.033765 kernel: pci_bus 0000:00: 
root bus resource [mem 0x80000000-0xfebfffff window] Dec 16 03:27:09.033893 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Dec 16 03:27:09.034010 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 16 03:27:09.034191 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Dec 16 03:27:09.034353 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Dec 16 03:27:09.034497 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint Dec 16 03:27:09.034653 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef] Dec 16 03:27:09.036274 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk Dec 16 03:27:09.036442 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk Dec 16 03:27:09.036576 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk Dec 16 03:27:09.036707 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk Dec 16 03:27:09.036929 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint Dec 16 03:27:09.037067 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f] Dec 16 03:27:09.037223 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Dec 16 03:27:09.037357 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Dec 16 03:27:09.037491 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Dec 16 03:27:09.037634 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint Dec 16 03:27:09.037854 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref] Dec 16 03:27:09.037990 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref] Dec 16 03:27:09.038120 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff] Dec 16 03:27:09.038288 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref] Dec 16 03:27:09.038440 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 16 03:27:09.038622 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Dec 16 03:27:09.039854 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf] Dec 16 03:27:09.040039 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff] Dec 16 03:27:09.040174 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref] Dec 16 03:27:09.040360 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Dec 16 03:27:09.040507 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df] Dec 16 03:27:09.040656 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff] Dec 16 03:27:09.043675 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref] Dec 16 03:27:09.043890 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint Dec 16 03:27:09.044028 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f] Dec 16 03:27:09.044160 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff] Dec 16 03:27:09.044295 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref] Dec 16 03:27:09.044469 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Dec 16 03:27:09.044604 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f] Dec 16 03:27:09.044739 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff] Dec 16 03:27:09.044889 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref] Dec 16 03:27:09.045035 kernel: pci 0000:00:07.0: 
[1af4:1001] type 00 class 0x010000 conventional PCI endpoint Dec 16 03:27:09.045204 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff] Dec 16 03:27:09.045389 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff] Dec 16 03:27:09.045592 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref] Dec 16 03:27:09.045878 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint Dec 16 03:27:09.046025 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f] Dec 16 03:27:09.046183 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref] Dec 16 03:27:09.046197 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 16 03:27:09.046207 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 16 03:27:09.046218 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 16 03:27:09.046227 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 16 03:27:09.046237 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Dec 16 03:27:09.046247 kernel: iommu: Default domain type: Translated Dec 16 03:27:09.046264 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 16 03:27:09.046275 kernel: PCI: Using ACPI for IRQ routing Dec 16 03:27:09.046284 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 16 03:27:09.046294 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 16 03:27:09.046304 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Dec 16 03:27:09.046438 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Dec 16 03:27:09.046569 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Dec 16 03:27:09.046706 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 16 03:27:09.046719 kernel: vgaarb: loaded Dec 16 03:27:09.046729 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Dec 16 03:27:09.046739 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Dec 16 03:27:09.046811 kernel: clocksource: Switched to clocksource kvm-clock Dec 16 03:27:09.046821 kernel: VFS: Disk quotas dquot_6.6.0 Dec 16 03:27:09.046831 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 16 03:27:09.046848 kernel: pnp: PnP ACPI init Dec 16 03:27:09.046858 kernel: pnp: PnP ACPI: found 4 devices Dec 16 03:27:09.046868 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 16 03:27:09.046878 kernel: NET: Registered PF_INET protocol family Dec 16 03:27:09.046887 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 16 03:27:09.046897 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 16 03:27:09.046908 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 16 03:27:09.046922 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 16 03:27:09.046933 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Dec 16 03:27:09.046942 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 16 03:27:09.046952 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 16 03:27:09.046962 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 16 03:27:09.046972 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 16 03:27:09.046982 kernel: NET: Registered PF_XDP protocol family Dec 16 03:27:09.047127 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 16 03:27:09.047250 kernel: pci_bus 
0000:00: resource 5 [io 0x0d00-0xffff window] Dec 16 03:27:09.047372 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 16 03:27:09.047495 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Dec 16 03:27:09.047615 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Dec 16 03:27:09.047763 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Dec 16 03:27:09.047903 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 16 03:27:09.047926 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Dec 16 03:27:09.048093 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 25545 usecs Dec 16 03:27:09.048107 kernel: PCI: CLS 0 bytes, default 64 Dec 16 03:27:09.048118 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 16 03:27:09.048128 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Dec 16 03:27:09.048139 kernel: Initialise system trusted keyrings Dec 16 03:27:09.048158 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 16 03:27:09.048168 kernel: Key type asymmetric registered Dec 16 03:27:09.048178 kernel: Asymmetric key parser 'x509' registered Dec 16 03:27:09.048188 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 16 03:27:09.048198 kernel: io scheduler mq-deadline registered Dec 16 03:27:09.048208 kernel: io scheduler kyber registered Dec 16 03:27:09.048218 kernel: io scheduler bfq registered Dec 16 03:27:09.048227 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 16 03:27:09.048243 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Dec 16 03:27:09.048253 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Dec 16 03:27:09.048262 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Dec 16 03:27:09.048272 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 16 03:27:09.048282 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 16 03:27:09.048291 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 16 03:27:09.048300 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 16 03:27:09.048316 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 16 03:27:09.048481 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 16 03:27:09.048503 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 16 03:27:09.048636 kernel: rtc_cmos 00:03: registered as rtc0 Dec 16 03:27:09.048771 kernel: rtc_cmos 00:03: setting system clock to 2025-12-16T03:27:07 UTC (1765855627) Dec 16 03:27:09.048908 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Dec 16 03:27:09.048939 kernel: intel_pstate: CPU model not supported Dec 16 03:27:09.048949 kernel: NET: Registered PF_INET6 protocol family Dec 16 03:27:09.048959 kernel: Segment Routing with IPv6 Dec 16 03:27:09.048969 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 03:27:09.048979 kernel: NET: Registered PF_PACKET protocol family Dec 16 03:27:09.048989 kernel: Key type dns_resolver registered Dec 16 03:27:09.048999 kernel: IPI shorthand broadcast: enabled Dec 16 03:27:09.049015 kernel: sched_clock: Marking stable (1987004544, 174841496)->(2303389590, -141543550) Dec 16 03:27:09.049025 kernel: registered taskstats version 1 Dec 16 03:27:09.049034 kernel: Loading compiled-in X.509 certificates Dec 16 03:27:09.049044 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: aafd1eb27ea805b8231c3bede9210239fae84df8' Dec 
16 03:27:09.049054 kernel: Demotion targets for Node 0: null Dec 16 03:27:09.049063 kernel: Key type .fscrypt registered Dec 16 03:27:09.049073 kernel: Key type fscrypt-provisioning registered Dec 16 03:27:09.049116 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 16 03:27:09.049132 kernel: ima: Allocated hash algorithm: sha1 Dec 16 03:27:09.049143 kernel: ima: No architecture policies found Dec 16 03:27:09.049154 kernel: clk: Disabling unused clocks Dec 16 03:27:09.049164 kernel: Freeing unused kernel image (initmem) memory: 15556K Dec 16 03:27:09.049174 kernel: Write protecting the kernel read-only data: 47104k Dec 16 03:27:09.049207 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K Dec 16 03:27:09.049231 kernel: Run /init as init process Dec 16 03:27:09.049242 kernel: with arguments: Dec 16 03:27:09.049284 kernel: /init Dec 16 03:27:09.049295 kernel: with environment: Dec 16 03:27:09.049305 kernel: HOME=/ Dec 16 03:27:09.049315 kernel: TERM=linux Dec 16 03:27:09.049326 kernel: SCSI subsystem initialized Dec 16 03:27:09.049336 kernel: libata version 3.00 loaded. Dec 16 03:27:09.049525 kernel: ata_piix 0000:00:01.1: version 2.13 Dec 16 03:27:09.049702 kernel: scsi host0: ata_piix Dec 16 03:27:09.049859 kernel: scsi host1: ata_piix Dec 16 03:27:09.049874 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0 Dec 16 03:27:09.049884 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0 Dec 16 03:27:09.049906 kernel: ACPI: bus type USB registered Dec 16 03:27:09.049916 kernel: usbcore: registered new interface driver usbfs Dec 16 03:27:09.049926 kernel: usbcore: registered new interface driver hub Dec 16 03:27:09.049936 kernel: usbcore: registered new device driver usb Dec 16 03:27:09.050081 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Dec 16 03:27:09.050214 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Dec 16 03:27:09.050344 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Dec 16 03:27:09.050521 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Dec 16 03:27:09.050706 kernel: hub 1-0:1.0: USB hub found Dec 16 03:27:09.050871 kernel: hub 1-0:1.0: 2 ports detected Dec 16 03:27:09.051050 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Dec 16 03:27:09.051211 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Dec 16 03:27:09.051226 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 16 03:27:09.051237 kernel: GPT:16515071 != 125829119 Dec 16 03:27:09.051247 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 16 03:27:09.051257 kernel: GPT:16515071 != 125829119 Dec 16 03:27:09.051275 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 16 03:27:09.051285 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 03:27:09.051424 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Dec 16 03:27:09.051584 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Dec 16 03:27:09.051806 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues Dec 16 03:27:09.051996 kernel: scsi host2: Virtio SCSI HBA Dec 16 03:27:09.052033 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Dec 16 03:27:09.052049 kernel: device-mapper: uevent: version 1.0.3 Dec 16 03:27:09.052063 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 03:27:09.052078 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Dec 16 03:27:09.052092 kernel: raid6: avx2x4 gen() 16769 MB/s Dec 16 03:27:09.052102 kernel: raid6: avx2x2 gen() 16930 MB/s Dec 16 03:27:09.052112 kernel: raid6: avx2x1 gen() 13012 MB/s Dec 16 03:27:09.052131 kernel: raid6: using algorithm avx2x2 gen() 16930 MB/s Dec 16 03:27:09.052141 kernel: raid6: .... xor() 20280 MB/s, rmw enabled Dec 16 03:27:09.052152 kernel: raid6: using avx2x2 recovery algorithm Dec 16 03:27:09.052161 kernel: xor: automatically using best checksumming function avx Dec 16 03:27:09.052172 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 03:27:09.052182 kernel: BTRFS: device fsid 57a8262f-2900-48ba-a17e-aafbd70d59c7 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (161) Dec 16 03:27:09.052192 kernel: BTRFS info (device dm-0): first mount of filesystem 57a8262f-2900-48ba-a17e-aafbd70d59c7 Dec 16 03:27:09.052208 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 16 03:27:09.052218 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 03:27:09.052229 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 03:27:09.052239 kernel: loop: module loaded Dec 16 03:27:09.052249 kernel: loop0: detected capacity change from 0 to 100528 Dec 16 03:27:09.052259 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 03:27:09.052271 systemd[1]: Successfully made /usr/ read-only. Dec 16 03:27:09.052290 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 03:27:09.052301 systemd[1]: Detected virtualization kvm. Dec 16 03:27:09.052312 systemd[1]: Detected architecture x86-64. Dec 16 03:27:09.052322 systemd[1]: Running in initrd. Dec 16 03:27:09.052332 systemd[1]: No hostname configured, using default hostname. Dec 16 03:27:09.052343 systemd[1]: Hostname set to . Dec 16 03:27:09.052359 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 16 03:27:09.052375 systemd[1]: Queued start job for default target initrd.target. Dec 16 03:27:09.052391 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 03:27:09.052408 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 03:27:09.052425 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 03:27:09.052444 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 03:27:09.052471 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 03:27:09.052491 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 03:27:09.052510 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 03:27:09.052529 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Dec 16 03:27:09.052547 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 03:27:09.052567 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 03:27:09.052592 systemd[1]: Reached target paths.target - Path Units. Dec 16 03:27:09.052611 systemd[1]: Reached target slices.target - Slice Units. Dec 16 03:27:09.052629 systemd[1]: Reached target swap.target - Swaps. Dec 16 03:27:09.052646 systemd[1]: Reached target timers.target - Timer Units. Dec 16 03:27:09.052662 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 03:27:09.052678 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 03:27:09.052689 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 03:27:09.052707 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 03:27:09.052718 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 03:27:09.052729 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 03:27:09.052740 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 03:27:09.052761 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 03:27:09.052777 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 03:27:09.052797 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 03:27:09.052813 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 03:27:09.052828 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 03:27:09.052842 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 03:27:09.052854 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 03:27:09.052865 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 03:27:09.052875 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 03:27:09.052893 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 03:27:09.052905 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 03:27:09.052919 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 03:27:09.052936 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 03:27:09.052960 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 03:27:09.052977 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 03:27:09.053050 systemd-journald[296]: Collecting audit messages is enabled. Dec 16 03:27:09.053093 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 03:27:09.053105 systemd-journald[296]: Journal started Dec 16 03:27:09.053146 systemd-journald[296]: Runtime Journal (/run/log/journal/ad16f4dd228947d0864fcb85ce1e0d54) is 4.8M, max 39.1M, 34.2M free. Dec 16 03:27:09.056397 kernel: audit: type=1130 audit(1765855629.053:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:27:09.056455 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 03:27:09.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:09.067786 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 03:27:09.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:09.072353 kernel: audit: type=1130 audit(1765855629.067:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:09.077995 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 03:27:09.131434 kernel: Bridge firewalling registered Dec 16 03:27:09.080255 systemd-modules-load[299]: Inserted module 'br_netfilter' Dec 16 03:27:09.132504 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 03:27:09.136874 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 03:27:09.142691 kernel: audit: type=1130 audit(1765855629.136:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:09.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:09.141879 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:27:09.155937 kernel: audit: type=1130 audit(1765855629.142:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:09.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:09.147150 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 03:27:09.157111 systemd-tmpfiles[314]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 03:27:09.159962 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 03:27:09.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:09.167001 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 03:27:09.172505 kernel: audit: type=1130 audit(1765855629.166:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:09.177983 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Dec 16 03:27:09.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:09.183818 kernel: audit: type=1130 audit(1765855629.177:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:09.194072 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 03:27:09.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:09.199784 kernel: audit: type=1130 audit(1765855629.194:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:09.200060 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 03:27:09.205058 kernel: audit: type=1130 audit(1765855629.199:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:09.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:09.204953 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 03:27:09.206000 audit: BPF prog-id=6 op=LOAD Dec 16 03:27:09.209768 kernel: audit: type=1334 audit(1765855629.206:10): prog-id=6 op=LOAD Dec 16 03:27:09.209971 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 03:27:09.232268 dracut-cmdline[336]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357 Dec 16 03:27:09.273110 systemd-resolved[337]: Positive Trust Anchors: Dec 16 03:27:09.274162 systemd-resolved[337]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 03:27:09.274173 systemd-resolved[337]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 03:27:09.274227 systemd-resolved[337]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 03:27:09.312744 systemd-resolved[337]: Defaulting to hostname 'linux'. Dec 16 03:27:09.313959 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Dec 16 03:27:09.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:09.314535 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 03:27:09.372780 kernel: Loading iSCSI transport class v2.0-870. Dec 16 03:27:09.390794 kernel: iscsi: registered transport (tcp) Dec 16 03:27:09.419798 kernel: iscsi: registered transport (qla4xxx) Dec 16 03:27:09.419924 kernel: QLogic iSCSI HBA Driver Dec 16 03:27:09.455438 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 03:27:09.480069 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 03:27:09.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:09.483155 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 03:27:09.548272 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 03:27:09.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:09.556184 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 03:27:09.557650 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 03:27:09.607361 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 03:27:09.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:09.607000 audit: BPF prog-id=7 op=LOAD Dec 16 03:27:09.607000 audit: BPF prog-id=8 op=LOAD Dec 16 03:27:09.609941 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 03:27:09.642675 systemd-udevd[576]: Using default interface naming scheme 'v257'. Dec 16 03:27:09.660294 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 03:27:09.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:09.664463 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 03:27:09.708672 dracut-pre-trigger[643]: rd.md=0: removing MD RAID activation Dec 16 03:27:09.710280 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 03:27:09.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:09.715000 audit: BPF prog-id=9 op=LOAD Dec 16 03:27:09.717606 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 03:27:09.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:27:09.755885 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 03:27:09.760091 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 03:27:09.810029 systemd-networkd[686]: lo: Link UP Dec 16 03:27:09.811404 systemd-networkd[686]: lo: Gained carrier Dec 16 03:27:09.813825 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 03:27:09.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:09.816487 systemd[1]: Reached target network.target - Network. Dec 16 03:27:09.889854 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 03:27:09.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:09.894411 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 03:27:10.058504 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 16 03:27:10.084541 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 16 03:27:10.114902 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 03:27:10.124773 kernel: cryptd: max_cpu_qlen set to 1000 Dec 16 03:27:10.132783 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Dec 16 03:27:10.135134 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 16 03:27:10.139328 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 03:27:10.181553 systemd-networkd[686]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 03:27:10.181572 systemd-networkd[686]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 03:27:10.185736 systemd-networkd[686]: eth1: Link UP Dec 16 03:27:10.187199 systemd-networkd[686]: eth1: Gained carrier Dec 16 03:27:10.187221 systemd-networkd[686]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 03:27:10.194132 disk-uuid[745]: Primary Header is updated. Dec 16 03:27:10.194132 disk-uuid[745]: Secondary Entries is updated. Dec 16 03:27:10.194132 disk-uuid[745]: Secondary Header is updated. Dec 16 03:27:10.201338 systemd-networkd[686]: eth1: DHCPv4 address 10.124.0.34/20 acquired from 169.254.169.253 Dec 16 03:27:10.202092 kernel: AES CTR mode by8 optimization enabled Dec 16 03:27:10.234537 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 03:27:10.235914 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:27:10.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:10.239058 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 03:27:10.245179 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Dec 16 03:27:10.264347 systemd-networkd[686]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/yy-digitalocean.network Dec 16 03:27:10.264362 systemd-networkd[686]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Dec 16 03:27:10.266269 systemd-networkd[686]: eth0: Link UP Dec 16 03:27:10.266638 systemd-networkd[686]: eth0: Gained carrier Dec 16 03:27:10.266660 systemd-networkd[686]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/yy-digitalocean.network Dec 16 03:27:10.276903 systemd-networkd[686]: eth0: DHCPv4 address 144.126.212.19/20, gateway 144.126.208.1 acquired from 169.254.169.253 Dec 16 03:27:10.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:10.410587 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:27:10.432476 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 03:27:10.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:10.434151 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 03:27:10.434833 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 03:27:10.435708 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 03:27:10.437919 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 03:27:10.472018 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 03:27:10.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:11.297885 disk-uuid[747]: Warning: The kernel is still using the old partition table. Dec 16 03:27:11.297885 disk-uuid[747]: The new table will be used at the next reboot or after you Dec 16 03:27:11.297885 disk-uuid[747]: run partprobe(8) or kpartx(8) Dec 16 03:27:11.297885 disk-uuid[747]: The operation has completed successfully. Dec 16 03:27:11.310604 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 03:27:11.320256 kernel: kauditd_printk_skb: 16 callbacks suppressed Dec 16 03:27:11.320295 kernel: audit: type=1130 audit(1765855631.310:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:11.320349 kernel: audit: type=1131 audit(1765855631.310:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:11.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:11.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:27:11.310837 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 03:27:11.313294 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 03:27:11.354313 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (833) Dec 16 03:27:11.354403 kernel: BTRFS info (device vda6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 03:27:11.355130 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 03:27:11.360914 kernel: BTRFS info (device vda6): turning on async discard Dec 16 03:27:11.361022 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 03:27:11.371180 kernel: BTRFS info (device vda6): last unmount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 03:27:11.371184 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 03:27:11.376791 kernel: audit: type=1130 audit(1765855631.371:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:11.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:11.375100 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 03:27:11.588993 ignition[852]: Ignition 2.24.0 Dec 16 03:27:11.589007 ignition[852]: Stage: fetch-offline Dec 16 03:27:11.589087 ignition[852]: no configs at "/usr/lib/ignition/base.d" Dec 16 03:27:11.589103 ignition[852]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 16 03:27:11.589844 ignition[852]: parsed url from cmdline: "" Dec 16 03:27:11.589851 ignition[852]: no config URL provided Dec 16 03:27:11.589860 ignition[852]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 03:27:11.598621 kernel: audit: type=1130 audit(1765855631.592:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:11.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:11.592657 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 03:27:11.589883 ignition[852]: no config at "/usr/lib/ignition/user.ign" Dec 16 03:27:11.595989 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
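[Annotation] The fetch-offline stage above walks a fixed lookup order before concluding that the config "requires networking" and deferring to the fetch stage: base configs under /usr/lib/ignition/base.d, the platform directory base.platform.d/digitalocean, a config URL from the kernel command line, and the system config /usr/lib/ignition/user.ign. A rough sketch of that search order, using only the paths named in the log; this is illustrative and not Ignition's actual implementation:

```python
#!/usr/bin/env python3
"""Sketch of the config lookup order the fetch-offline stage logs above.
Paths come straight from the log lines; the control flow is a
simplification, not a reimplementation of Ignition."""
from pathlib import Path

BASE_D = Path("/usr/lib/ignition/base.d")
PLATFORM_D = Path("/usr/lib/ignition/base.platform.d/digitalocean")
USER_CONFIG = Path("/usr/lib/ignition/user.ign")


def find_offline_config(cmdline_url: str | None = None) -> str | None:
    # 1. base config fragments, 2. platform-specific fragments
    for base in (BASE_D, PLATFORM_D):
        if base.is_dir() and any(base.iterdir()):
            return f"base config dir: {base}"
    # 3. a config URL on the kernel command line would need networking,
    #    which is exactly why the log defers to the networked fetch stage
    if cmdline_url:
        return None
    # 4. the embedded system config, if any
    if USER_CONFIG.is_file():
        return f"system config: {USER_CONFIG}"
    return None


if __name__ == "__main__":
    print(find_offline_config() or "nothing usable offline; defer to fetch stage")
```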
Dec 16 03:27:11.589895 ignition[852]: failed to fetch config: resource requires networking Dec 16 03:27:11.591340 ignition[852]: Ignition finished successfully Dec 16 03:27:11.635774 ignition[858]: Ignition 2.24.0 Dec 16 03:27:11.635788 ignition[858]: Stage: fetch Dec 16 03:27:11.636018 ignition[858]: no configs at "/usr/lib/ignition/base.d" Dec 16 03:27:11.636028 ignition[858]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 16 03:27:11.636207 ignition[858]: parsed url from cmdline: "" Dec 16 03:27:11.636213 ignition[858]: no config URL provided Dec 16 03:27:11.636237 ignition[858]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 03:27:11.636253 ignition[858]: no config at "/usr/lib/ignition/user.ign" Dec 16 03:27:11.636312 ignition[858]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Dec 16 03:27:11.650817 ignition[858]: GET result: OK Dec 16 03:27:11.651058 ignition[858]: parsing config with SHA512: b39903bb3f4ac1cf21b6caf699d2e57aafc5ac370b7ea1f1dbc81090d7197dea223776398e60a737bc757c97f99ae204c08deaf2bfe3e8fdabaf4bc0c5c3705c Dec 16 03:27:11.663005 unknown[858]: fetched base config from "system" Dec 16 03:27:11.663607 ignition[858]: fetch: fetch complete Dec 16 03:27:11.663018 unknown[858]: fetched base config from "system" Dec 16 03:27:11.663616 ignition[858]: fetch: fetch passed Dec 16 03:27:11.663025 unknown[858]: fetched user config from "digitalocean" Dec 16 03:27:11.663692 ignition[858]: Ignition finished successfully Dec 16 03:27:11.667020 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 16 03:27:11.672126 kernel: audit: type=1130 audit(1765855631.666:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:11.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:11.669642 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 03:27:11.703063 ignition[864]: Ignition 2.24.0 Dec 16 03:27:11.703077 ignition[864]: Stage: kargs Dec 16 03:27:11.703313 ignition[864]: no configs at "/usr/lib/ignition/base.d" Dec 16 03:27:11.703329 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 16 03:27:11.704584 ignition[864]: kargs: kargs passed Dec 16 03:27:11.704656 ignition[864]: Ignition finished successfully Dec 16 03:27:11.706513 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 03:27:11.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:11.711986 kernel: audit: type=1130 audit(1765855631.706:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:11.713328 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
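[Annotation] In the fetch stage the droplet's user data is pulled from the DigitalOcean metadata service at http://169.254.169.254/metadata/v1/user-data, and the resulting config is identified in the log by a SHA512 digest. A minimal sketch of the same round trip, assuming the link-local metadata endpoint is reachable from the instance; note that Ignition hashes the parsed config, so the digest of the raw payload will not necessarily match the value logged above:

```python
#!/usr/bin/env python3
"""Sketch: fetch droplet user-data and report its SHA512, mirroring the
fetch-stage log lines above. Assumes the link-local metadata service is
reachable; the logged hash covers the parsed config, not the raw bytes."""
import hashlib
import urllib.request

USER_DATA_URL = "http://169.254.169.254/metadata/v1/user-data"


def fetch_user_data(timeout: float = 5.0) -> bytes:
    with urllib.request.urlopen(USER_DATA_URL, timeout=timeout) as resp:
        return resp.read()


if __name__ == "__main__":
    raw = fetch_user_data()
    print("GET result: OK,", len(raw), "bytes")
    print("sha512 of raw payload:", hashlib.sha512(raw).hexdigest())
```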
Dec 16 03:27:11.749856 ignition[870]: Ignition 2.24.0 Dec 16 03:27:11.749870 ignition[870]: Stage: disks Dec 16 03:27:11.750130 ignition[870]: no configs at "/usr/lib/ignition/base.d" Dec 16 03:27:11.750151 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 16 03:27:11.751408 ignition[870]: disks: disks passed Dec 16 03:27:11.751466 ignition[870]: Ignition finished successfully Dec 16 03:27:11.754216 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 03:27:11.755674 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 03:27:11.760175 kernel: audit: type=1130 audit(1765855631.754:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:11.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:11.759596 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 03:27:11.760584 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 03:27:11.761550 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 03:27:11.762437 systemd[1]: Reached target basic.target - Basic System. Dec 16 03:27:11.764861 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 03:27:11.810619 systemd-fsck[879]: ROOT: clean, 15/456736 files, 38230/456704 blocks Dec 16 03:27:11.814036 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 03:27:11.819333 kernel: audit: type=1130 audit(1765855631.813:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:11.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:11.816612 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 03:27:11.950797 kernel: EXT4-fs (vda9): mounted filesystem 1314c107-11a5-486b-9d52-be9f57b6bf1b r/w with ordered data mode. Quota mode: none. Dec 16 03:27:11.952358 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 03:27:11.954051 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 03:27:11.957005 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 03:27:11.959491 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 03:27:11.963293 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Dec 16 03:27:11.970728 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 16 03:27:11.975100 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 03:27:11.975168 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 03:27:11.981371 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 03:27:11.984916 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
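[Annotation] systemd-fsck reports the ROOT filesystem as clean with 15/456736 files and 38230/456704 blocks in use. A trivial sketch turning that summary into usage percentages (roughly 0.003% of inodes and 8.4% of blocks), using the numbers exactly as logged:

```python
# Sketch: convert the fsck summary "15/456736 files, 38230/456704 blocks"
# from the log above into usage percentages.
files_used, files_total = 15, 456_736
blocks_used, blocks_total = 38_230, 456_704

print(f"inodes: {100 * files_used / files_total:.4f}% used")   # ~0.0033%
print(f"blocks: {100 * blocks_used / blocks_total:.2f}% used")  # ~8.37%
```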
Dec 16 03:27:12.008119 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (887) Dec 16 03:27:12.010587 systemd-networkd[686]: eth1: Gained IPv6LL Dec 16 03:27:12.012805 kernel: BTRFS info (device vda6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 03:27:12.015949 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 03:27:12.027200 kernel: BTRFS info (device vda6): turning on async discard Dec 16 03:27:12.027313 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 03:27:12.033663 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 03:27:12.101701 coreos-metadata[889]: Dec 16 03:27:12.099 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 16 03:27:12.108425 coreos-metadata[890]: Dec 16 03:27:12.108 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 16 03:27:12.114097 coreos-metadata[889]: Dec 16 03:27:12.114 INFO Fetch successful Dec 16 03:27:12.122783 coreos-metadata[890]: Dec 16 03:27:12.121 INFO Fetch successful Dec 16 03:27:12.122657 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Dec 16 03:27:12.122879 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Dec 16 03:27:12.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-afterburn-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:12.131241 coreos-metadata[890]: Dec 16 03:27:12.131 INFO wrote hostname ci-4547.0.0-8-fbad3a37dc to /sysroot/etc/hostname Dec 16 03:27:12.133644 kernel: audit: type=1130 audit(1765855632.124:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-afterburn-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:12.133681 kernel: audit: type=1131 audit(1765855632.124:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-afterburn-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:12.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-afterburn-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:12.131305 systemd-networkd[686]: eth0: Gained IPv6LL Dec 16 03:27:12.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:12.134255 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 16 03:27:12.236953 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 03:27:12.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:12.239068 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 03:27:12.240416 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 03:27:12.268796 kernel: BTRFS info (device vda6): last unmount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 03:27:12.292105 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
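[Annotation] Both metadata units above fetch http://169.254.169.254/metadata/v1.json, and the hostname agent then writes the droplet hostname (ci-4547.0.0-8-fbad3a37dc) into /sysroot/etc/hostname. A sketch of that step, assuming the DigitalOcean metadata document exposes the value under a top-level "hostname" key; the exact schema is an assumption here, not taken from the log:

```python
#!/usr/bin/env python3
"""Sketch: fetch the droplet metadata document and persist the hostname,
as flatcar-metadata-hostname does above. The "hostname" key is assumed
from DigitalOcean's metadata v1 schema; verify against the real payload."""
import json
import urllib.request
from pathlib import Path

METADATA_URL = "http://169.254.169.254/metadata/v1.json"


def write_hostname(target: Path = Path("/sysroot/etc/hostname")) -> str:
    with urllib.request.urlopen(METADATA_URL, timeout=5) as resp:
        metadata = json.load(resp)
    hostname = metadata["hostname"]  # assumed key, see note above
    target.write_text(hostname + "\n")
    return hostname


if __name__ == "__main__":
    print("wrote hostname", write_hostname())
```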
Dec 16 03:27:12.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:12.317567 ignition[995]: INFO : Ignition 2.24.0 Dec 16 03:27:12.317567 ignition[995]: INFO : Stage: mount Dec 16 03:27:12.318910 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 03:27:12.318910 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 16 03:27:12.320171 ignition[995]: INFO : mount: mount passed Dec 16 03:27:12.320171 ignition[995]: INFO : Ignition finished successfully Dec 16 03:27:12.321635 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 03:27:12.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:12.324089 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 03:27:12.340778 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 03:27:12.346798 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 03:27:12.369802 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1005) Dec 16 03:27:12.374010 kernel: BTRFS info (device vda6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 03:27:12.374103 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 03:27:12.379436 kernel: BTRFS info (device vda6): turning on async discard Dec 16 03:27:12.379548 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 03:27:12.382201 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 16 03:27:12.425112 ignition[1021]: INFO : Ignition 2.24.0 Dec 16 03:27:12.425112 ignition[1021]: INFO : Stage: files Dec 16 03:27:12.426615 ignition[1021]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 03:27:12.426615 ignition[1021]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 16 03:27:12.427706 ignition[1021]: DEBUG : files: compiled without relabeling support, skipping Dec 16 03:27:12.429733 ignition[1021]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 03:27:12.429733 ignition[1021]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 03:27:12.437098 ignition[1021]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 03:27:12.438028 ignition[1021]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 03:27:12.438793 ignition[1021]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 03:27:12.438274 unknown[1021]: wrote ssh authorized keys file for user: core Dec 16 03:27:12.440338 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 16 03:27:12.440338 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Dec 16 03:27:12.477174 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 03:27:12.527176 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 16 03:27:12.527176 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 16 03:27:12.528883 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 03:27:12.528883 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 03:27:12.530475 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 03:27:12.530475 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 03:27:12.530475 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 03:27:12.530475 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 03:27:12.530475 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 03:27:12.535037 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 03:27:12.535037 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 03:27:12.535037 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 16 03:27:12.537511 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: 
op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 16 03:27:12.537511 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 16 03:27:12.537511 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Dec 16 03:27:13.097642 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 16 03:27:14.766788 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 16 03:27:14.766788 ignition[1021]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 16 03:27:14.769884 ignition[1021]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 03:27:14.771938 ignition[1021]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 03:27:14.773587 ignition[1021]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 16 03:27:14.773587 ignition[1021]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 16 03:27:14.773587 ignition[1021]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 03:27:14.773587 ignition[1021]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 03:27:14.773587 ignition[1021]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 03:27:14.773587 ignition[1021]: INFO : files: files passed Dec 16 03:27:14.773587 ignition[1021]: INFO : Ignition finished successfully Dec 16 03:27:14.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:14.775324 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 03:27:14.780987 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 03:27:14.784967 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 03:27:14.794115 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 03:27:14.794276 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 03:27:14.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:14.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:27:14.808564 initrd-setup-root-after-ignition[1054]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 03:27:14.808564 initrd-setup-root-after-ignition[1054]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 03:27:14.811673 initrd-setup-root-after-ignition[1058]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 03:27:14.815093 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 03:27:14.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:14.817201 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 03:27:14.819033 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 03:27:14.884863 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 03:27:14.885041 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 16 03:27:14.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:14.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:14.886369 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 03:27:14.886950 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 03:27:14.888250 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 03:27:14.889658 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 03:27:14.920381 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 03:27:14.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:14.922796 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 03:27:14.946227 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 03:27:14.946579 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 03:27:14.947997 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 03:27:14.949039 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 03:27:14.949969 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 03:27:14.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:14.950155 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 03:27:14.951153 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 03:27:14.951710 systemd[1]: Stopped target basic.target - Basic System. Dec 16 03:27:14.952706 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
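[Annotation] The files stage logged above wrote SSH keys for the core user, downloaded the Helm tarball and the kubernetes sysext image, created the /etc/extensions/kubernetes.raw symlink, installed prepare-helm.service, and enabled it via a preset. As a rough illustration of the kind of Ignition v3 config that drives those operations, here is a sketch of its shape; field names follow the public Ignition spec, while the spec version, SSH key, and unit contents are placeholders rather than the real config delivered to this droplet (the two download URLs are the ones in the log):

```python
#!/usr/bin/env python3
"""Sketch: approximate shape of an Ignition v3 config producing the
files-stage operations logged above. Field names follow the public
Ignition spec; key material and unit contents are placeholders."""
import json

config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {
        "users": [
            {"name": "core",
             "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]}
        ]
    },
    "storage": {
        "files": [
            {"path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
             "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"}},
            {"path": "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw",
             "contents": {"source": "https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw"}},
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"},
        ],
    },
    "systemd": {
        "units": [
            {"name": "prepare-helm.service",
             "enabled": True,
             "contents": "[Unit]\n... placeholder unit body ..."},
        ]
    },
}

if __name__ == "__main__":
    print(json.dumps(config, indent=2))
```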
Dec 16 03:27:14.953560 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 03:27:14.954503 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 03:27:14.955519 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 03:27:14.956480 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 03:27:14.957392 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 03:27:14.958382 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 03:27:14.959238 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 03:27:14.960485 systemd[1]: Stopped target swap.target - Swaps. Dec 16 03:27:14.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:14.961330 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 03:27:14.961487 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 03:27:14.962692 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 03:27:14.963689 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 03:27:14.968932 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 03:27:14.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:14.969153 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 03:27:14.969988 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 03:27:14.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:14.970264 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 03:27:14.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:14.971489 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 03:27:14.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:14.971685 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 03:27:14.972705 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 03:27:14.972823 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 03:27:14.973585 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 16 03:27:14.973726 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 16 03:27:14.975676 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 03:27:14.977529 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 03:27:14.979904 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. 
Dec 16 03:27:14.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:14.981650 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 03:27:14.982168 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 03:27:14.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:14.983898 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 03:27:14.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:14.984549 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 03:27:14.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:14.984740 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 03:27:14.986137 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 03:27:14.986249 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 03:27:14.994561 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 03:27:14.996817 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 03:27:14.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:14.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:15.012080 ignition[1078]: INFO : Ignition 2.24.0 Dec 16 03:27:15.012080 ignition[1078]: INFO : Stage: umount Dec 16 03:27:15.014840 ignition[1078]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 03:27:15.014840 ignition[1078]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 16 03:27:15.014840 ignition[1078]: INFO : umount: umount passed Dec 16 03:27:15.014840 ignition[1078]: INFO : Ignition finished successfully Dec 16 03:27:15.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:15.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:15.014318 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 03:27:15.015033 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 03:27:15.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:15.015157 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Dec 16 03:27:15.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:15.016605 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 03:27:15.016722 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 03:27:15.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:15.018840 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 03:27:15.018902 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 03:27:15.020160 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 16 03:27:15.020215 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 16 03:27:15.022107 systemd[1]: Stopped target network.target - Network. Dec 16 03:27:15.023366 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 03:27:15.023435 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 03:27:15.024211 systemd[1]: Stopped target paths.target - Path Units. Dec 16 03:27:15.024981 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 03:27:15.028817 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 03:27:15.029383 systemd[1]: Stopped target slices.target - Slice Units. Dec 16 03:27:15.030293 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 03:27:15.031227 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 03:27:15.031283 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 03:27:15.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:15.032172 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 03:27:15.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:15.032213 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 03:27:15.033017 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 16 03:27:15.033046 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 16 03:27:15.033916 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 03:27:15.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:15.033986 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 03:27:15.034800 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 03:27:15.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:15.034851 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 03:27:15.035830 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Dec 16 03:27:15.036715 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 03:27:15.040022 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 03:27:15.040192 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 03:27:15.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:15.048000 audit: BPF prog-id=6 op=UNLOAD Dec 16 03:27:15.042185 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 03:27:15.042356 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 03:27:15.047033 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 03:27:15.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:15.047169 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 03:27:15.050246 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 03:27:15.050362 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 16 03:27:15.053652 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 03:27:15.054165 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 03:27:15.054000 audit: BPF prog-id=9 op=UNLOAD Dec 16 03:27:15.054211 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 03:27:15.056275 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 03:27:15.058035 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 03:27:15.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:15.058122 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 03:27:15.060868 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 03:27:15.060943 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 03:27:15.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:15.062158 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 03:27:15.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:15.062219 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 03:27:15.064881 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 03:27:15.083369 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 03:27:15.083546 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 03:27:15.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:27:15.084829 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 03:27:15.084877 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 03:27:15.086801 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 03:27:15.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:15.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:15.086866 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 03:27:15.089198 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 03:27:15.089306 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 03:27:15.091235 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 03:27:15.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:15.091313 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 03:27:15.091823 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 03:27:15.091871 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 03:27:15.102951 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 03:27:15.103474 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 03:27:15.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:15.103557 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 03:27:15.105828 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 03:27:15.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:15.105939 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 03:27:15.108113 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 03:27:15.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:15.108188 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:27:15.110480 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 03:27:15.119382 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 03:27:15.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:15.129385 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Dec 16 03:27:15.129557 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 03:27:15.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:15.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:15.131199 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 03:27:15.133367 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 03:27:15.157888 systemd[1]: Switching root. Dec 16 03:27:15.190912 systemd-journald[296]: Journal stopped Dec 16 03:27:16.636472 systemd-journald[296]: Received SIGTERM from PID 1 (systemd). Dec 16 03:27:16.636567 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 03:27:16.636585 kernel: SELinux: policy capability open_perms=1 Dec 16 03:27:16.636601 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 03:27:16.636616 kernel: SELinux: policy capability always_check_network=0 Dec 16 03:27:16.636639 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 03:27:16.636676 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 03:27:16.636689 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 03:27:16.636702 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 03:27:16.636715 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 03:27:16.636729 systemd[1]: Successfully loaded SELinux policy in 73.306ms. Dec 16 03:27:16.636778 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.381ms. Dec 16 03:27:16.636794 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 03:27:16.636815 systemd[1]: Detected virtualization kvm. Dec 16 03:27:16.636829 systemd[1]: Detected architecture x86-64. Dec 16 03:27:16.636842 systemd[1]: Detected first boot. Dec 16 03:27:16.636856 systemd[1]: Hostname set to . Dec 16 03:27:16.636874 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 16 03:27:16.636887 zram_generator::config[1126]: No configuration found. Dec 16 03:27:16.636909 kernel: Guest personality initialized and is inactive Dec 16 03:27:16.636922 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 16 03:27:16.636934 kernel: Initialized host personality Dec 16 03:27:16.636946 kernel: NET: Registered PF_VSOCK protocol family Dec 16 03:27:16.636960 systemd[1]: Populated /etc with preset unit settings. Dec 16 03:27:16.636979 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 03:27:16.636997 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 03:27:16.637017 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 03:27:16.637036 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 03:27:16.637051 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
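[Annotation] After the switch root, systemd detects KVM, notes the first boot, and initializes the machine ID from the SMBIOS/DMI UUID. A small sketch of where that seed value is exposed on such a guest, assuming the standard sysfs DMI path; systemd's actual derivation of /etc/machine-id from the UUID is more involved than reading the file:

```python
#!/usr/bin/env python3
"""Sketch: read the SMBIOS/DMI product UUID that systemd uses to seed the
machine ID on first boot, per the log above. Reading it normally requires
root; the transformation into /etc/machine-id is systemd-internal."""
from pathlib import Path

DMI_UUID = Path("/sys/class/dmi/id/product_uuid")


def dmi_product_uuid() -> str:
    return DMI_UUID.read_text().strip()


if __name__ == "__main__":
    print("SMBIOS/DMI UUID:", dmi_product_uuid())
```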
Dec 16 03:27:16.637080 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 03:27:16.637096 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 03:27:16.637115 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 03:27:16.637129 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 03:27:16.637152 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 03:27:16.637166 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 03:27:16.637180 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 03:27:16.637194 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 03:27:16.637207 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 03:27:16.637222 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 03:27:16.637236 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 03:27:16.637256 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 03:27:16.637270 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 16 03:27:16.637283 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 03:27:16.637296 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 03:27:16.637310 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 03:27:16.637330 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 03:27:16.637344 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 03:27:16.637359 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 03:27:16.637372 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 03:27:16.637389 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 03:27:16.637403 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 16 03:27:16.637417 systemd[1]: Reached target slices.target - Slice Units. Dec 16 03:27:16.637437 systemd[1]: Reached target swap.target - Swaps. Dec 16 03:27:16.637451 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 03:27:16.637465 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 03:27:16.637478 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 03:27:16.637492 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 03:27:16.637506 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 16 03:27:16.637520 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 03:27:16.637540 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 16 03:27:16.637554 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 16 03:27:16.643846 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 03:27:16.643876 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Dec 16 03:27:16.643890 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 03:27:16.643904 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 03:27:16.643919 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 03:27:16.643953 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 03:27:16.643968 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:27:16.643982 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 03:27:16.643996 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 03:27:16.644010 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 03:27:16.644025 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 03:27:16.644038 systemd[1]: Reached target machines.target - Containers. Dec 16 03:27:16.644060 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 03:27:16.644074 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 03:27:16.644088 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 03:27:16.644101 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 03:27:16.644115 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 03:27:16.644129 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 03:27:16.644144 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 03:27:16.644165 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 03:27:16.644178 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 03:27:16.644192 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 03:27:16.644207 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 03:27:16.644221 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 03:27:16.644236 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 03:27:16.644256 kernel: kauditd_printk_skb: 66 callbacks suppressed Dec 16 03:27:16.644272 kernel: audit: type=1131 audit(1765855636.382:103): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:16.644287 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 03:27:16.644301 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 03:27:16.644323 kernel: audit: type=1131 audit(1765855636.392:104): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:27:16.644337 kernel: audit: type=1334 audit(1765855636.400:105): prog-id=14 op=UNLOAD Dec 16 03:27:16.644349 kernel: audit: type=1334 audit(1765855636.400:106): prog-id=13 op=UNLOAD Dec 16 03:27:16.644362 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 03:27:16.644376 kernel: audit: type=1334 audit(1765855636.402:107): prog-id=15 op=LOAD Dec 16 03:27:16.644389 kernel: audit: type=1334 audit(1765855636.402:108): prog-id=16 op=LOAD Dec 16 03:27:16.644409 kernel: audit: type=1334 audit(1765855636.402:109): prog-id=17 op=LOAD Dec 16 03:27:16.644422 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 03:27:16.644436 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 03:27:16.644458 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 03:27:16.644482 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 03:27:16.644509 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 03:27:16.644523 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:27:16.644536 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 03:27:16.644551 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 03:27:16.644575 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 03:27:16.644590 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 03:27:16.644604 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 03:27:16.644619 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 03:27:16.644634 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 03:27:16.644647 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 03:27:16.644663 kernel: audit: type=1130 audit(1765855636.495:110): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:16.644685 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 03:27:16.644700 kernel: audit: type=1130 audit(1765855636.505:111): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:16.644712 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 03:27:16.644726 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 03:27:16.644739 kernel: audit: type=1131 audit(1765855636.508:112): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:16.644764 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 03:27:16.644786 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 03:27:16.644806 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 03:27:16.644828 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Dec 16 03:27:16.644843 kernel: fuse: init (API version 7.41) Dec 16 03:27:16.644857 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 03:27:16.644870 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 03:27:16.644886 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 03:27:16.644900 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 03:27:16.644921 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 03:27:16.644936 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 03:27:16.644950 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 16 03:27:16.644965 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 03:27:16.644980 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 03:27:16.644994 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 03:27:16.645009 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 03:27:16.645032 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 03:27:16.645046 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 03:27:16.645075 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 03:27:16.645132 systemd-journald[1195]: Collecting audit messages is enabled. Dec 16 03:27:16.645168 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 03:27:16.645183 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 03:27:16.645205 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 03:27:16.645220 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 03:27:16.645234 systemd-journald[1195]: Journal started Dec 16 03:27:16.645262 systemd-journald[1195]: Runtime Journal (/run/log/journal/ad16f4dd228947d0864fcb85ce1e0d54) is 4.8M, max 39.1M, 34.2M free. Dec 16 03:27:16.214000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 16 03:27:16.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:16.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:27:16.400000 audit: BPF prog-id=14 op=UNLOAD Dec 16 03:27:16.400000 audit: BPF prog-id=13 op=UNLOAD Dec 16 03:27:16.402000 audit: BPF prog-id=15 op=LOAD Dec 16 03:27:16.402000 audit: BPF prog-id=16 op=LOAD Dec 16 03:27:16.402000 audit: BPF prog-id=17 op=LOAD Dec 16 03:27:16.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:16.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:16.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:16.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:16.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:16.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:16.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:16.649779 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 03:27:16.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:16.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:16.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:16.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:16.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:16.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:27:16.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:16.632000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 16 03:27:16.632000 audit[1195]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffd15657190 a2=4000 a3=0 items=0 ppid=1 pid=1195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:16.632000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 16 03:27:16.119968 systemd[1]: Queued start job for default target multi-user.target. Dec 16 03:27:16.143911 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 16 03:27:16.144560 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 03:27:16.662194 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 03:27:16.662286 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 03:27:16.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:16.668076 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 03:27:16.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:16.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:16.669406 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 03:27:16.671474 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 03:27:16.672854 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 03:27:16.698185 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 03:27:16.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:16.703687 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 03:27:16.706947 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 03:27:16.712949 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 03:27:16.721122 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 03:27:16.747821 kernel: loop1: detected capacity change from 0 to 8 Dec 16 03:27:16.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:27:16.748060 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 03:27:16.760826 kernel: ACPI: bus type drm_connector registered Dec 16 03:27:16.762893 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 03:27:16.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:16.770627 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 03:27:16.772706 systemd-journald[1195]: Time spent on flushing to /var/log/journal/ad16f4dd228947d0864fcb85ce1e0d54 is 46.568ms for 1148 entries. Dec 16 03:27:16.772706 systemd-journald[1195]: System Journal (/var/log/journal/ad16f4dd228947d0864fcb85ce1e0d54) is 8M, max 163.5M, 155.5M free. Dec 16 03:27:16.830332 systemd-journald[1195]: Received client request to flush runtime journal. Dec 16 03:27:16.830392 kernel: loop2: detected capacity change from 0 to 50784 Dec 16 03:27:16.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:16.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:16.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:16.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:16.815000 audit: BPF prog-id=18 op=LOAD Dec 16 03:27:16.815000 audit: BPF prog-id=19 op=LOAD Dec 16 03:27:16.815000 audit: BPF prog-id=20 op=LOAD Dec 16 03:27:16.819000 audit: BPF prog-id=21 op=LOAD Dec 16 03:27:16.772941 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 03:27:16.796287 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 03:27:16.812727 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 03:27:16.817560 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 16 03:27:16.822984 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 03:27:16.828600 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 03:27:16.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:16.835011 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 03:27:16.837000 audit: BPF prog-id=22 op=LOAD Dec 16 03:27:16.838000 audit: BPF prog-id=23 op=LOAD Dec 16 03:27:16.838000 audit: BPF prog-id=24 op=LOAD Dec 16 03:27:16.839871 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... 
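The journald statistics above report 46.568 ms spent flushing 1148 entries to the persistent journal, with the runtime journal at 4.8M of a 39.1M cap. A quick Python check of what those figures imply; the numbers are copied from the log, nothing is measured here:

    # Per-entry flush cost for the reported flush of 1148 entries.
    flush_ms, entries = 46.568, 1148
    print(f"{flush_ms / entries * 1000:.1f} us per entry")   # ~40.6 us per entry

    # Runtime journal headroom as reported: 4.8M used of a 39.1M cap.
    used_mib, cap_mib = 4.8, 39.1
    print(f"{used_mib / cap_mib:.1%} of the runtime journal cap in use")  # ~12.3%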
Dec 16 03:27:16.847783 kernel: loop3: detected capacity change from 0 to 219144 Dec 16 03:27:16.845000 audit: BPF prog-id=25 op=LOAD Dec 16 03:27:16.846000 audit: BPF prog-id=26 op=LOAD Dec 16 03:27:16.846000 audit: BPF prog-id=27 op=LOAD Dec 16 03:27:16.849342 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 03:27:16.880618 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. Dec 16 03:27:16.881046 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. Dec 16 03:27:16.887792 kernel: loop4: detected capacity change from 0 to 111560 Dec 16 03:27:16.891616 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 03:27:16.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:16.924668 systemd-nsresourced[1268]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 16 03:27:16.929782 kernel: loop5: detected capacity change from 0 to 8 Dec 16 03:27:16.930666 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 16 03:27:16.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:16.941824 kernel: loop6: detected capacity change from 0 to 50784 Dec 16 03:27:16.964823 kernel: loop7: detected capacity change from 0 to 219144 Dec 16 03:27:16.964656 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 03:27:16.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:16.991786 kernel: loop1: detected capacity change from 0 to 111560 Dec 16 03:27:17.015079 (sd-merge)[1278]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-digitalocean.raw'. Dec 16 03:27:17.032289 (sd-merge)[1278]: Merged extensions into '/usr'. Dec 16 03:27:17.041044 systemd[1]: Reload requested from client PID 1229 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 03:27:17.041080 systemd[1]: Reloading... Dec 16 03:27:17.132637 systemd-oomd[1263]: No swap; memory pressure usage will be degraded Dec 16 03:27:17.182838 zram_generator::config[1314]: No configuration found. Dec 16 03:27:17.201437 systemd-resolved[1264]: Positive Trust Anchors: Dec 16 03:27:17.201457 systemd-resolved[1264]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 03:27:17.201461 systemd-resolved[1264]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 03:27:17.201499 systemd-resolved[1264]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 03:27:17.228870 systemd-resolved[1264]: Using system hostname 'ci-4547.0.0-8-fbad3a37dc'. Dec 16 03:27:17.582744 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 03:27:17.583869 systemd[1]: Reloading finished in 542 ms. Dec 16 03:27:17.605663 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 16 03:27:17.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:17.610686 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 03:27:17.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:17.612290 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 03:27:17.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:17.620070 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 03:27:17.627997 systemd[1]: Starting ensure-sysext.service... Dec 16 03:27:17.631165 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
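systemd-resolved loads the two root-zone DS records above as positive trust anchors and treats the listed domains as negative trust anchors. A rough Python sketch of the suffix match this implies, using only a handful of the anchors listed above; this approximates, and does not reproduce, resolved's internal logic:

    # Subset of the negative trust anchors from the log above.
    NEGATIVE_ANCHORS = {
        "home.arpa", "168.192.in-addr.arpa", "d.f.ip6.arpa",
        "local", "internal", "lan", "home", "corp", "test",
    }

    def under_negative_anchor(name: str) -> bool:
        # A name falls under an anchor if any of its trailing label runs matches.
        labels = name.rstrip(".").lower().split(".")
        return any(".".join(labels[i:]) in NEGATIVE_ANCHORS for i in range(len(labels)))

    print(under_negative_anchor("printer.local"))   # True
    print(under_negative_anchor("example.org"))     # False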
Dec 16 03:27:17.633000 audit: BPF prog-id=28 op=LOAD Dec 16 03:27:17.633000 audit: BPF prog-id=25 op=UNLOAD Dec 16 03:27:17.633000 audit: BPF prog-id=29 op=LOAD Dec 16 03:27:17.633000 audit: BPF prog-id=30 op=LOAD Dec 16 03:27:17.633000 audit: BPF prog-id=26 op=UNLOAD Dec 16 03:27:17.633000 audit: BPF prog-id=27 op=UNLOAD Dec 16 03:27:17.634000 audit: BPF prog-id=31 op=LOAD Dec 16 03:27:17.634000 audit: BPF prog-id=21 op=UNLOAD Dec 16 03:27:17.637000 audit: BPF prog-id=32 op=LOAD Dec 16 03:27:17.637000 audit: BPF prog-id=18 op=UNLOAD Dec 16 03:27:17.637000 audit: BPF prog-id=33 op=LOAD Dec 16 03:27:17.637000 audit: BPF prog-id=34 op=LOAD Dec 16 03:27:17.637000 audit: BPF prog-id=19 op=UNLOAD Dec 16 03:27:17.637000 audit: BPF prog-id=20 op=UNLOAD Dec 16 03:27:17.639000 audit: BPF prog-id=35 op=LOAD Dec 16 03:27:17.639000 audit: BPF prog-id=22 op=UNLOAD Dec 16 03:27:17.639000 audit: BPF prog-id=36 op=LOAD Dec 16 03:27:17.639000 audit: BPF prog-id=37 op=LOAD Dec 16 03:27:17.639000 audit: BPF prog-id=23 op=UNLOAD Dec 16 03:27:17.639000 audit: BPF prog-id=24 op=UNLOAD Dec 16 03:27:17.640000 audit: BPF prog-id=38 op=LOAD Dec 16 03:27:17.640000 audit: BPF prog-id=15 op=UNLOAD Dec 16 03:27:17.640000 audit: BPF prog-id=39 op=LOAD Dec 16 03:27:17.640000 audit: BPF prog-id=40 op=LOAD Dec 16 03:27:17.640000 audit: BPF prog-id=16 op=UNLOAD Dec 16 03:27:17.640000 audit: BPF prog-id=17 op=UNLOAD Dec 16 03:27:17.650604 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 03:27:17.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:17.651000 audit: BPF prog-id=8 op=UNLOAD Dec 16 03:27:17.651000 audit: BPF prog-id=7 op=UNLOAD Dec 16 03:27:17.652000 audit: BPF prog-id=41 op=LOAD Dec 16 03:27:17.652000 audit: BPF prog-id=42 op=LOAD Dec 16 03:27:17.654102 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 03:27:17.670239 systemd[1]: Reload requested from client PID 1360 ('systemctl') (unit ensure-sysext.service)... Dec 16 03:27:17.670264 systemd[1]: Reloading... Dec 16 03:27:17.688684 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 03:27:17.689357 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 03:27:17.690041 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 03:27:17.692024 systemd-tmpfiles[1361]: ACLs are not supported, ignoring. Dec 16 03:27:17.692190 systemd-tmpfiles[1361]: ACLs are not supported, ignoring. Dec 16 03:27:17.700790 systemd-tmpfiles[1361]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 03:27:17.701021 systemd-tmpfiles[1361]: Skipping /boot Dec 16 03:27:17.716272 systemd-tmpfiles[1361]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 03:27:17.717956 systemd-tmpfiles[1361]: Skipping /boot Dec 16 03:27:17.755356 systemd-udevd[1363]: Using default interface naming scheme 'v257'. Dec 16 03:27:17.801299 zram_generator::config[1397]: No configuration found. 
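systemd-tmpfiles skips /boot above because it detects an autofs mount point there. One way to confirm that from userspace is to read the filesystem type recorded for /boot in /proc/mounts; a small Python sketch (the helper name is illustrative):

    def fstype(mount_point: str) -> str | None:
        # /proc/mounts lines are: device mountpoint fstype options dump pass
        with open("/proc/mounts") as mounts:
            for line in mounts:
                _, mnt, fs, *_ = line.split()
                if mnt == mount_point:
                    return fs
        return None

    print(fstype("/boot"))   # "autofs" on this image, per the log above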
Dec 16 03:27:18.031840 kernel: mousedev: PS/2 mouse device common for all mice Dec 16 03:27:18.042782 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 16 03:27:18.054780 kernel: ACPI: button: Power Button [PWRF] Dec 16 03:27:18.180502 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Dec 16 03:27:18.181004 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 16 03:27:18.260031 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 16 03:27:18.260518 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 03:27:18.261401 systemd[1]: Reloading finished in 590 ms. Dec 16 03:27:18.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:18.271951 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 03:27:18.278361 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 03:27:18.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:18.282000 audit: BPF prog-id=43 op=LOAD Dec 16 03:27:18.282000 audit: BPF prog-id=28 op=UNLOAD Dec 16 03:27:18.283000 audit: BPF prog-id=44 op=LOAD Dec 16 03:27:18.283000 audit: BPF prog-id=45 op=LOAD Dec 16 03:27:18.283000 audit: BPF prog-id=29 op=UNLOAD Dec 16 03:27:18.283000 audit: BPF prog-id=30 op=UNLOAD Dec 16 03:27:18.283000 audit: BPF prog-id=46 op=LOAD Dec 16 03:27:18.283000 audit: BPF prog-id=32 op=UNLOAD Dec 16 03:27:18.285000 audit: BPF prog-id=47 op=LOAD Dec 16 03:27:18.285000 audit: BPF prog-id=48 op=LOAD Dec 16 03:27:18.285000 audit: BPF prog-id=33 op=UNLOAD Dec 16 03:27:18.285000 audit: BPF prog-id=34 op=UNLOAD Dec 16 03:27:18.285000 audit: BPF prog-id=49 op=LOAD Dec 16 03:27:18.285000 audit: BPF prog-id=50 op=LOAD Dec 16 03:27:18.285000 audit: BPF prog-id=41 op=UNLOAD Dec 16 03:27:18.285000 audit: BPF prog-id=42 op=UNLOAD Dec 16 03:27:18.287000 audit: BPF prog-id=51 op=LOAD Dec 16 03:27:18.287000 audit: BPF prog-id=31 op=UNLOAD Dec 16 03:27:18.288000 audit: BPF prog-id=52 op=LOAD Dec 16 03:27:18.288000 audit: BPF prog-id=35 op=UNLOAD Dec 16 03:27:18.288000 audit: BPF prog-id=53 op=LOAD Dec 16 03:27:18.288000 audit: BPF prog-id=54 op=LOAD Dec 16 03:27:18.288000 audit: BPF prog-id=36 op=UNLOAD Dec 16 03:27:18.288000 audit: BPF prog-id=37 op=UNLOAD Dec 16 03:27:18.293000 audit: BPF prog-id=55 op=LOAD Dec 16 03:27:18.293000 audit: BPF prog-id=38 op=UNLOAD Dec 16 03:27:18.293000 audit: BPF prog-id=56 op=LOAD Dec 16 03:27:18.293000 audit: BPF prog-id=57 op=LOAD Dec 16 03:27:18.293000 audit: BPF prog-id=39 op=UNLOAD Dec 16 03:27:18.293000 audit: BPF prog-id=40 op=UNLOAD Dec 16 03:27:18.350046 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Dec 16 03:27:18.350638 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:27:18.353232 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 03:27:18.360302 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Dec 16 03:27:18.361201 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 03:27:18.367348 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 03:27:18.370840 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 03:27:18.374145 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 03:27:18.375066 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 03:27:18.376040 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 03:27:18.384113 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 03:27:18.397498 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 03:27:18.398153 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 03:27:18.405294 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 03:27:18.408000 audit: BPF prog-id=58 op=LOAD Dec 16 03:27:18.410940 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 03:27:18.420442 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 03:27:18.426839 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:27:18.436962 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:27:18.437270 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 03:27:18.444491 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 03:27:18.445941 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 03:27:18.446444 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 03:27:18.446612 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 03:27:18.446833 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:27:18.511297 systemd[1]: Finished ensure-sysext.service. Dec 16 03:27:18.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:18.516000 audit: BPF prog-id=59 op=LOAD Dec 16 03:27:18.520000 audit[1497]: SYSTEM_BOOT pid=1497 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? 
res=success' Dec 16 03:27:18.523229 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 16 03:27:18.524130 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 03:27:18.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:18.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:18.524661 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 03:27:18.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:18.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:18.538285 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 03:27:18.541618 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 03:27:18.568813 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 03:27:18.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:18.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:18.570136 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 03:27:18.570385 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 03:27:18.573480 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 03:27:18.582930 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 03:27:18.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:18.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:18.587517 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 03:27:18.592507 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 03:27:18.597378 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 03:27:18.597631 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Dec 16 03:27:18.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:18.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:18.624956 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 03:27:18.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:18.628379 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 03:27:18.633519 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 03:27:18.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:18.694000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 16 03:27:18.694000 audit[1531]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffea534b260 a2=420 a3=0 items=0 ppid=1481 pid=1531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:18.694000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 03:27:18.703855 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 03:27:18.704980 augenrules[1531]: No rules Dec 16 03:27:18.706329 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 03:27:18.751651 systemd-networkd[1494]: lo: Link UP Dec 16 03:27:18.754795 systemd-networkd[1494]: lo: Gained carrier Dec 16 03:27:18.761791 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 03:27:18.764147 systemd-networkd[1494]: eth1: Configuring with /run/systemd/network/10-7e:bb:66:6c:38:83.network. Dec 16 03:27:18.767545 systemd-networkd[1494]: eth0: Configuring with /run/systemd/network/10-3e:aa:c3:c2:1f:82.network. 
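systemd-networkd matches eth0 and eth1 above against per-interface files named 10-<mac>.network under /run/systemd/network. A short Python sketch that reconstructs the expected path for an interface from the MAC address exposed in sysfs, following the naming seen in this log (the helper name is illustrative):

    from pathlib import Path

    def expected_network_unit(iface: str) -> Path:
        # sysfs exposes the interface's MAC as a one-line file.
        mac = Path(f"/sys/class/net/{iface}/address").read_text().strip()
        return Path("/run/systemd/network") / f"10-{mac}.network"

    # e.g. /run/systemd/network/10-3e:aa:c3:c2:1f:82.network for eth0 above
    print(expected_network_unit("eth0"))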
Dec 16 03:27:18.769358 systemd-networkd[1494]: eth1: Link UP Dec 16 03:27:18.769720 systemd-networkd[1494]: eth1: Gained carrier Dec 16 03:27:18.777191 systemd-networkd[1494]: eth0: Link UP Dec 16 03:27:18.778638 systemd-networkd[1494]: eth0: Gained carrier Dec 16 03:27:18.825807 kernel: ISO 9660 Extensions: RRIP_1991A Dec 16 03:27:18.848704 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Dec 16 03:27:18.848844 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Dec 16 03:27:18.902803 kernel: EDAC MC: Ver: 3.0.0 Dec 16 03:27:18.937795 kernel: Console: switching to colour dummy device 80x25 Dec 16 03:27:18.942842 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Dec 16 03:27:18.942935 kernel: [drm] features: -context_init Dec 16 03:27:18.942951 kernel: [drm] number of scanouts: 1 Dec 16 03:27:18.942996 kernel: [drm] number of cap sets: 0 Dec 16 03:27:18.943011 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 Dec 16 03:27:18.950109 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 16 03:27:18.950471 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Dec 16 03:27:18.951911 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:27:18.958563 systemd[1]: Reached target network.target - Network. Dec 16 03:27:18.959254 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 03:27:18.961904 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 03:27:18.965002 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 03:27:18.969887 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Dec 16 03:27:18.969973 kernel: Console: switching to colour frame buffer device 128x48 Dec 16 03:27:18.980479 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 03:27:18.980684 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:27:18.985776 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Dec 16 03:27:18.993300 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 03:27:18.996699 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 03:27:19.018219 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 03:27:19.018480 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:27:19.022960 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 03:27:19.038790 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 03:27:19.072930 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:27:19.882059 systemd-resolved[1264]: Clock change detected. Flushing caches. Dec 16 03:27:19.882855 systemd-timesyncd[1505]: Contacted time server 207.58.172.126:123 (0.flatcar.pool.ntp.org). Dec 16 03:27:19.883415 systemd-timesyncd[1505]: Initial clock synchronization to Tue 2025-12-16 03:27:19.881972 UTC. Dec 16 03:27:19.971198 ldconfig[1486]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 03:27:19.974468 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 03:27:19.977377 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Dec 16 03:27:20.004661 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 03:27:20.006889 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 03:27:20.007180 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 03:27:20.007283 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 03:27:20.007403 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 16 03:27:20.007662 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 03:27:20.007824 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 03:27:20.007912 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 16 03:27:20.008050 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 16 03:27:20.008218 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 03:27:20.009851 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 03:27:20.009905 systemd[1]: Reached target paths.target - Path Units. Dec 16 03:27:20.010007 systemd[1]: Reached target timers.target - Timer Units. Dec 16 03:27:20.012860 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 03:27:20.016786 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 03:27:20.021276 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 03:27:20.022888 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 03:27:20.023349 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 03:27:20.026758 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 03:27:20.030385 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 03:27:20.032077 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 03:27:20.034770 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 03:27:20.035712 systemd[1]: Reached target basic.target - Basic System. Dec 16 03:27:20.036254 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 03:27:20.036283 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 03:27:20.037593 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 03:27:20.042330 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 16 03:27:20.045239 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 03:27:20.054413 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 03:27:20.061336 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 03:27:20.066347 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 03:27:20.068716 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 03:27:20.075521 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... 
Dec 16 03:27:20.083444 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 03:27:20.089379 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 03:27:20.097610 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 03:27:20.106796 extend-filesystems[1566]: Found /dev/vda6 Dec 16 03:27:20.109700 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 03:27:20.129804 jq[1565]: false Dec 16 03:27:20.127582 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 03:27:20.129872 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 03:27:20.130715 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 03:27:20.131454 extend-filesystems[1566]: Found /dev/vda9 Dec 16 03:27:20.132865 oslogin_cache_refresh[1567]: Refreshing passwd entry cache Dec 16 03:27:20.133527 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 03:27:20.143645 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Refreshing passwd entry cache Dec 16 03:27:20.143645 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Failure getting users, quitting Dec 16 03:27:20.143645 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 03:27:20.143645 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Refreshing group entry cache Dec 16 03:27:20.143645 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Failure getting groups, quitting Dec 16 03:27:20.143645 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 03:27:20.140551 oslogin_cache_refresh[1567]: Failure getting users, quitting Dec 16 03:27:20.140572 oslogin_cache_refresh[1567]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 03:27:20.140638 oslogin_cache_refresh[1567]: Refreshing group entry cache Dec 16 03:27:20.141741 oslogin_cache_refresh[1567]: Failure getting groups, quitting Dec 16 03:27:20.141755 oslogin_cache_refresh[1567]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 03:27:20.145758 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 03:27:20.148500 extend-filesystems[1566]: Checking size of /dev/vda9 Dec 16 03:27:20.152692 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 03:27:20.156239 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 03:27:20.156632 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 03:27:20.157047 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 16 03:27:20.157378 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 16 03:27:20.185047 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 03:27:20.186510 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Dec 16 03:27:20.219441 coreos-metadata[1560]: Dec 16 03:27:20.219 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 16 03:27:20.223430 coreos-metadata[1560]: Dec 16 03:27:20.220 INFO Fetch successful Dec 16 03:27:20.223548 jq[1579]: true Dec 16 03:27:20.234509 extend-filesystems[1566]: Resized partition /dev/vda9 Dec 16 03:27:20.252415 extend-filesystems[1609]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 03:27:20.271944 jq[1606]: true Dec 16 03:27:20.279388 update_engine[1577]: I20251216 03:27:20.277922 1577 main.cc:92] Flatcar Update Engine starting Dec 16 03:27:20.294314 dbus-daemon[1561]: [system] SELinux support is enabled Dec 16 03:27:20.294726 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 03:27:20.301370 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 14138363 blocks Dec 16 03:27:20.302884 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 03:27:20.303592 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 03:27:20.306692 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 03:27:20.306813 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Dec 16 03:27:20.306842 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 03:27:20.309649 update_engine[1577]: I20251216 03:27:20.309397 1577 update_check_scheduler.cc:74] Next update check in 7m0s Dec 16 03:27:20.312624 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 03:27:20.314468 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 03:27:20.321054 tar[1589]: linux-amd64/LICENSE Dec 16 03:27:20.322603 systemd[1]: Started update-engine.service - Update Engine. Dec 16 03:27:20.328379 tar[1589]: linux-amd64/helm Dec 16 03:27:20.332518 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 03:27:20.408763 systemd-logind[1576]: New seat seat0. Dec 16 03:27:20.412186 systemd-logind[1576]: Watching system buttons on /dev/input/event2 (Power Button) Dec 16 03:27:20.412215 systemd-logind[1576]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 16 03:27:20.413660 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 03:27:20.463799 kernel: EXT4-fs (vda9): resized filesystem to 14138363 Dec 16 03:27:20.460274 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 16 03:27:20.464394 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 03:27:20.472858 extend-filesystems[1609]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 16 03:27:20.472858 extend-filesystems[1609]: old_desc_blocks = 1, new_desc_blocks = 7 Dec 16 03:27:20.472858 extend-filesystems[1609]: The filesystem on /dev/vda9 is now 14138363 (4k) blocks long. Dec 16 03:27:20.504826 extend-filesystems[1566]: Resized filesystem in /dev/vda9 Dec 16 03:27:20.473847 systemd[1]: extend-filesystems.service: Deactivated successfully. 
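The resize above grows the ext4 root filesystem from 456704 to 14138363 blocks of 4 KiB each. A quick Python calculation of what that means in GiB, with the figures copied from the log:

    BLOCK = 4096  # ext4 block size in bytes
    old_blocks, new_blocks = 456704, 14138363
    gib = lambda blocks: blocks * BLOCK / 2**30
    print(f"{gib(old_blocks):.2f} GiB -> {gib(new_blocks):.2f} GiB")   # ~1.74 GiB -> ~53.93 GiB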
Dec 16 03:27:20.474662 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 03:27:20.515872 bash[1640]: Updated "/home/core/.ssh/authorized_keys" Dec 16 03:27:20.523398 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 03:27:20.542286 systemd[1]: Starting sshkeys.service... Dec 16 03:27:20.605051 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 16 03:27:20.610764 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 16 03:27:20.722573 coreos-metadata[1646]: Dec 16 03:27:20.720 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 16 03:27:20.733964 coreos-metadata[1646]: Dec 16 03:27:20.731 INFO Fetch successful Dec 16 03:27:20.748325 unknown[1646]: wrote ssh authorized keys file for user: core Dec 16 03:27:20.801522 update-ssh-keys[1652]: Updated "/home/core/.ssh/authorized_keys" Dec 16 03:27:20.803913 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 16 03:27:20.812176 systemd[1]: Finished sshkeys.service. Dec 16 03:27:20.888160 locksmithd[1619]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 03:27:20.891323 containerd[1596]: time="2025-12-16T03:27:20Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 03:27:20.893975 containerd[1596]: time="2025-12-16T03:27:20.893758023Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 16 03:27:20.896251 systemd-networkd[1494]: eth0: Gained IPv6LL Dec 16 03:27:20.902628 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 03:27:20.906576 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 03:27:20.912493 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:27:20.919976 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
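containerd warns above that it is ignoring an unknown top-level subreaper key at row 8 of /usr/share/containerd/config.toml while migrating a version-2 config. A small Python sketch, assuming Python 3.11+ for tomllib, that reports whether a config file still carries that key; the helper name is illustrative and the default path is taken from the log line:

    import tomllib

    def has_subreaper(path: str = "/usr/share/containerd/config.toml") -> bool:
        # tomllib.load expects a binary file object and returns a plain dict.
        with open(path, "rb") as f:
            return "subreaper" in tomllib.load(f)

    print(has_subreaper())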
Dec 16 03:27:20.957839 containerd[1596]: time="2025-12-16T03:27:20.957782760Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.213µs" Dec 16 03:27:20.959154 containerd[1596]: time="2025-12-16T03:27:20.958713271Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 03:27:20.959154 containerd[1596]: time="2025-12-16T03:27:20.958785149Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 03:27:20.959154 containerd[1596]: time="2025-12-16T03:27:20.958799425Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 03:27:20.959154 containerd[1596]: time="2025-12-16T03:27:20.958972791Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 03:27:20.959154 containerd[1596]: time="2025-12-16T03:27:20.958987894Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 03:27:20.959154 containerd[1596]: time="2025-12-16T03:27:20.959059807Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 03:27:20.959154 containerd[1596]: time="2025-12-16T03:27:20.959070816Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 03:27:20.960011 containerd[1596]: time="2025-12-16T03:27:20.959980100Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 03:27:20.960190 containerd[1596]: time="2025-12-16T03:27:20.960165568Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 03:27:20.960473 containerd[1596]: time="2025-12-16T03:27:20.960455930Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 03:27:20.960547 containerd[1596]: time="2025-12-16T03:27:20.960533711Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 03:27:20.960831 containerd[1596]: time="2025-12-16T03:27:20.960804475Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 03:27:20.963108 containerd[1596]: time="2025-12-16T03:27:20.962756724Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 03:27:20.963108 containerd[1596]: time="2025-12-16T03:27:20.962940765Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 03:27:20.964146 containerd[1596]: time="2025-12-16T03:27:20.963907248Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 03:27:20.964146 containerd[1596]: time="2025-12-16T03:27:20.963952798Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Dec 16 03:27:20.964146 containerd[1596]: time="2025-12-16T03:27:20.963962876Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 03:27:20.964146 containerd[1596]: time="2025-12-16T03:27:20.964025961Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 03:27:20.972077 containerd[1596]: time="2025-12-16T03:27:20.971577977Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 03:27:20.972077 containerd[1596]: time="2025-12-16T03:27:20.971779735Z" level=info msg="metadata content store policy set" policy=shared Dec 16 03:27:20.976130 containerd[1596]: time="2025-12-16T03:27:20.976020550Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 03:27:20.976281 containerd[1596]: time="2025-12-16T03:27:20.976266031Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 03:27:20.977552 containerd[1596]: time="2025-12-16T03:27:20.977336044Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 03:27:20.977552 containerd[1596]: time="2025-12-16T03:27:20.977369232Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 03:27:20.977552 containerd[1596]: time="2025-12-16T03:27:20.977397855Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 03:27:20.977552 containerd[1596]: time="2025-12-16T03:27:20.977411578Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 03:27:20.977552 containerd[1596]: time="2025-12-16T03:27:20.977423347Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 03:27:20.977552 containerd[1596]: time="2025-12-16T03:27:20.977433283Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 03:27:20.977552 containerd[1596]: time="2025-12-16T03:27:20.977444414Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 03:27:20.977552 containerd[1596]: time="2025-12-16T03:27:20.977459313Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 03:27:20.977552 containerd[1596]: time="2025-12-16T03:27:20.977480852Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 03:27:20.977552 containerd[1596]: time="2025-12-16T03:27:20.977508370Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 03:27:20.977552 containerd[1596]: time="2025-12-16T03:27:20.977522876Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 03:27:20.978560 containerd[1596]: time="2025-12-16T03:27:20.978484464Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 03:27:20.980269 containerd[1596]: time="2025-12-16T03:27:20.979000173Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 03:27:20.980269 
containerd[1596]: time="2025-12-16T03:27:20.979318792Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 03:27:20.980269 containerd[1596]: time="2025-12-16T03:27:20.979336890Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 03:27:20.980269 containerd[1596]: time="2025-12-16T03:27:20.980023687Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 03:27:20.980269 containerd[1596]: time="2025-12-16T03:27:20.980045945Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 03:27:20.980269 containerd[1596]: time="2025-12-16T03:27:20.980192673Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 03:27:20.980269 containerd[1596]: time="2025-12-16T03:27:20.980219152Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 03:27:20.983332 containerd[1596]: time="2025-12-16T03:27:20.981245781Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 03:27:20.983332 containerd[1596]: time="2025-12-16T03:27:20.981294460Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 03:27:20.983332 containerd[1596]: time="2025-12-16T03:27:20.981322322Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 03:27:20.983332 containerd[1596]: time="2025-12-16T03:27:20.981341299Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 03:27:20.983332 containerd[1596]: time="2025-12-16T03:27:20.983176238Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 03:27:20.983332 containerd[1596]: time="2025-12-16T03:27:20.983275771Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 03:27:20.983332 containerd[1596]: time="2025-12-16T03:27:20.983293155Z" level=info msg="Start snapshots syncer" Dec 16 03:27:20.985171 containerd[1596]: time="2025-12-16T03:27:20.984469680Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 03:27:20.987402 containerd[1596]: time="2025-12-16T03:27:20.985906976Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 03:27:20.987402 containerd[1596]: time="2025-12-16T03:27:20.985975516Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 03:27:20.988203 containerd[1596]: time="2025-12-16T03:27:20.986041951Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 03:27:20.988203 containerd[1596]: time="2025-12-16T03:27:20.986234832Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 03:27:20.988203 containerd[1596]: time="2025-12-16T03:27:20.986268019Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 03:27:20.988203 containerd[1596]: time="2025-12-16T03:27:20.986281204Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 03:27:20.988203 containerd[1596]: time="2025-12-16T03:27:20.986294918Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 03:27:20.988203 containerd[1596]: time="2025-12-16T03:27:20.986311874Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 03:27:20.988203 containerd[1596]: time="2025-12-16T03:27:20.986325111Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 03:27:20.988203 containerd[1596]: time="2025-12-16T03:27:20.986340767Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 03:27:20.988203 containerd[1596]: time="2025-12-16T03:27:20.986354883Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 
03:27:20.988203 containerd[1596]: time="2025-12-16T03:27:20.986397804Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 03:27:20.988203 containerd[1596]: time="2025-12-16T03:27:20.986440466Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 03:27:20.988203 containerd[1596]: time="2025-12-16T03:27:20.986462891Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 03:27:20.988203 containerd[1596]: time="2025-12-16T03:27:20.986476855Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 03:27:20.988470 containerd[1596]: time="2025-12-16T03:27:20.986491045Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 03:27:20.988470 containerd[1596]: time="2025-12-16T03:27:20.986500978Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 03:27:20.988470 containerd[1596]: time="2025-12-16T03:27:20.986515669Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 03:27:20.988470 containerd[1596]: time="2025-12-16T03:27:20.986530667Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 03:27:20.988470 containerd[1596]: time="2025-12-16T03:27:20.986548944Z" level=info msg="runtime interface created" Dec 16 03:27:20.993134 containerd[1596]: time="2025-12-16T03:27:20.986558942Z" level=info msg="created NRI interface" Dec 16 03:27:20.993134 containerd[1596]: time="2025-12-16T03:27:20.993030783Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 03:27:20.993134 containerd[1596]: time="2025-12-16T03:27:20.993120674Z" level=info msg="Connect containerd service" Dec 16 03:27:20.993304 containerd[1596]: time="2025-12-16T03:27:20.993169134Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 03:27:20.995873 containerd[1596]: time="2025-12-16T03:27:20.995616336Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 03:27:21.033418 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 03:27:21.208230 sshd_keygen[1597]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 03:27:21.255575 containerd[1596]: time="2025-12-16T03:27:21.255445867Z" level=info msg="Start subscribing containerd event" Dec 16 03:27:21.255761 containerd[1596]: time="2025-12-16T03:27:21.255732751Z" level=info msg="Start recovering state" Dec 16 03:27:21.258800 containerd[1596]: time="2025-12-16T03:27:21.256353818Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Dec 16 03:27:21.258800 containerd[1596]: time="2025-12-16T03:27:21.257834455Z" level=info msg="Start event monitor" Dec 16 03:27:21.258800 containerd[1596]: time="2025-12-16T03:27:21.257858234Z" level=info msg="Start cni network conf syncer for default" Dec 16 03:27:21.258800 containerd[1596]: time="2025-12-16T03:27:21.257867293Z" level=info msg="Start streaming server" Dec 16 03:27:21.258800 containerd[1596]: time="2025-12-16T03:27:21.257876141Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 03:27:21.258800 containerd[1596]: time="2025-12-16T03:27:21.257886092Z" level=info msg="runtime interface starting up..." Dec 16 03:27:21.258800 containerd[1596]: time="2025-12-16T03:27:21.257892467Z" level=info msg="starting plugins..." Dec 16 03:27:21.258800 containerd[1596]: time="2025-12-16T03:27:21.257917091Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 03:27:21.258800 containerd[1596]: time="2025-12-16T03:27:21.257929179Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 03:27:21.258800 containerd[1596]: time="2025-12-16T03:27:21.258562684Z" level=info msg="containerd successfully booted in 0.368582s" Dec 16 03:27:21.258371 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 03:27:21.281887 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 03:27:21.287368 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 03:27:21.328196 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 03:27:21.328978 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 03:27:21.334489 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 03:27:21.344268 systemd-networkd[1494]: eth1: Gained IPv6LL Dec 16 03:27:21.370670 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 03:27:21.378547 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 03:27:21.381501 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 16 03:27:21.383572 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 03:27:21.387839 tar[1589]: linux-amd64/README.md Dec 16 03:27:21.420258 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 03:27:22.096060 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 03:27:22.100491 systemd[1]: Started sshd@0-144.126.212.19:22-147.75.109.163:56224.service - OpenSSH per-connection server daemon (147.75.109.163:56224). Dec 16 03:27:22.183119 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:27:22.184548 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 03:27:22.188077 systemd[1]: Startup finished in 3.044s (kernel) + 6.726s (initrd) + 6.125s (userspace) = 15.896s. Dec 16 03:27:22.194781 (kubelet)[1715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 03:27:22.216283 sshd[1707]: Accepted publickey for core from 147.75.109.163 port 56224 ssh2: RSA SHA256:6kOFCGRVte/xmFQ810xFB2aQq7tPvT6sNyInYz0ISZM Dec 16 03:27:22.225309 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:27:22.250823 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 03:27:22.252812 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
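Editor's note: the "starting cri plugin" entry a little above logs containerd's effective CRI configuration as an escaped JSON blob (config="{\"containerd\":...}"), which is hard to read in place. A minimal sketch for pulling that JSON back out of a journal line and pretty-printing it; the helper name is mine, and it assumes the journald-style \" escaping shown in the log.

    import json
    import re
    import sys

    def extract_cri_config(journal_text: str) -> dict:
        """Pull the escaped config="{...}" payload out of a 'starting cri plugin' entry."""
        match = re.search(r'config="(\{.*\})"', journal_text)
        if match is None:
            raise ValueError('no config="..." field found')
        # containerd logs the JSON with inner quotes escaped as \" ; undo that first.
        return json.loads(match.group(1).replace('\\"', '"'))

    if __name__ == "__main__":
        # Usage idea: pipe in the line(s) containing 'starting cri plugin'.
        cfg = extract_cri_config(sys.stdin.read())
        # Confirm the settings visible in the dump above, e.g. cgroup driver and CNI paths.
        runc_opts = cfg["containerd"]["runtimes"]["runc"]["options"]
        print("SystemdCgroup:", runc_opts["SystemdCgroup"])
        print("CNI conf dir: ", cfg["cni"]["confDir"])
        print(json.dumps(cfg, indent=2))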
Dec 16 03:27:22.261807 systemd-logind[1576]: New session 1 of user core. Dec 16 03:27:22.288948 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 03:27:22.292845 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 03:27:22.311830 (systemd)[1721]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:27:22.319806 systemd-logind[1576]: New session 2 of user core. Dec 16 03:27:22.471876 systemd[1721]: Queued start job for default target default.target. Dec 16 03:27:22.482969 systemd[1721]: Created slice app.slice - User Application Slice. Dec 16 03:27:22.483016 systemd[1721]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 16 03:27:22.483031 systemd[1721]: Reached target paths.target - Paths. Dec 16 03:27:22.483107 systemd[1721]: Reached target timers.target - Timers. Dec 16 03:27:22.486221 systemd[1721]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 03:27:22.487663 systemd[1721]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 16 03:27:22.506552 systemd[1721]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 03:27:22.506676 systemd[1721]: Reached target sockets.target - Sockets. Dec 16 03:27:22.530715 systemd[1721]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 16 03:27:22.530877 systemd[1721]: Reached target basic.target - Basic System. Dec 16 03:27:22.530946 systemd[1721]: Reached target default.target - Main User Target. Dec 16 03:27:22.530980 systemd[1721]: Startup finished in 202ms. Dec 16 03:27:22.531534 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 03:27:22.538169 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 03:27:22.593893 systemd[1]: Started sshd@1-144.126.212.19:22-147.75.109.163:36128.service - OpenSSH per-connection server daemon (147.75.109.163:36128). Dec 16 03:27:22.722660 sshd[1741]: Accepted publickey for core from 147.75.109.163 port 36128 ssh2: RSA SHA256:6kOFCGRVte/xmFQ810xFB2aQq7tPvT6sNyInYz0ISZM Dec 16 03:27:22.723680 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:27:22.732255 systemd-logind[1576]: New session 3 of user core. Dec 16 03:27:22.738350 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 03:27:22.772118 sshd[1745]: Connection closed by 147.75.109.163 port 36128 Dec 16 03:27:22.771163 sshd-session[1741]: pam_unix(sshd:session): session closed for user core Dec 16 03:27:22.780351 systemd[1]: sshd@1-144.126.212.19:22-147.75.109.163:36128.service: Deactivated successfully. Dec 16 03:27:22.783749 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 03:27:22.785159 systemd-logind[1576]: Session 3 logged out. Waiting for processes to exit. Dec 16 03:27:22.792354 systemd[1]: Started sshd@2-144.126.212.19:22-147.75.109.163:36132.service - OpenSSH per-connection server daemon (147.75.109.163:36132). Dec 16 03:27:22.794148 systemd-logind[1576]: Removed session 3. 
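Editor's note: the "Startup finished in 3.044s (kernel) + 6.726s (initrd) + 6.125s (userspace) = 15.896s" entry a few lines above is systemd's standard boot summary. A minimal sketch (the function name is mine) for splitting such a line into per-stage seconds; note systemd rounds the total independently of the parts, so re-adding the parts can differ by a millisecond.

    import re

    def parse_startup_summary(line: str) -> dict:
        """Parse systemd's 'Startup finished in ...' message into per-stage seconds."""
        stages = {name: float(secs)
                  for secs, name in re.findall(r'([\d.]+)s \((\w+)\)', line)}
        total = re.search(r'= ([\d.]+)s', line)
        stages["total_reported"] = float(total.group(1)) if total else sum(stages.values())
        return stages

    line = ("Startup finished in 3.044s (kernel) + 6.726s (initrd) "
            "+ 6.125s (userspace) = 15.896s")
    print(parse_startup_summary(line))
    # {'kernel': 3.044, 'initrd': 6.726, 'userspace': 6.125, 'total_reported': 15.896}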
Dec 16 03:27:22.860870 kubelet[1715]: E1216 03:27:22.860818 1715 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 03:27:22.864064 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 03:27:22.864385 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 03:27:22.865339 systemd[1]: kubelet.service: Consumed 1.144s CPU time, 257.8M memory peak. Dec 16 03:27:22.883504 sshd[1752]: Accepted publickey for core from 147.75.109.163 port 36132 ssh2: RSA SHA256:6kOFCGRVte/xmFQ810xFB2aQq7tPvT6sNyInYz0ISZM Dec 16 03:27:22.885239 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:27:22.893147 systemd-logind[1576]: New session 4 of user core. Dec 16 03:27:22.902469 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 03:27:22.919930 sshd[1757]: Connection closed by 147.75.109.163 port 36132 Dec 16 03:27:22.919393 sshd-session[1752]: pam_unix(sshd:session): session closed for user core Dec 16 03:27:22.936014 systemd[1]: sshd@2-144.126.212.19:22-147.75.109.163:36132.service: Deactivated successfully. Dec 16 03:27:22.938717 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 03:27:22.940167 systemd-logind[1576]: Session 4 logged out. Waiting for processes to exit. Dec 16 03:27:22.944701 systemd[1]: Started sshd@3-144.126.212.19:22-147.75.109.163:36148.service - OpenSSH per-connection server daemon (147.75.109.163:36148). Dec 16 03:27:22.945939 systemd-logind[1576]: Removed session 4. Dec 16 03:27:23.021943 sshd[1763]: Accepted publickey for core from 147.75.109.163 port 36148 ssh2: RSA SHA256:6kOFCGRVte/xmFQ810xFB2aQq7tPvT6sNyInYz0ISZM Dec 16 03:27:23.023733 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:27:23.031667 systemd-logind[1576]: New session 5 of user core. Dec 16 03:27:23.042485 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 03:27:23.070320 sshd[1767]: Connection closed by 147.75.109.163 port 36148 Dec 16 03:27:23.071033 sshd-session[1763]: pam_unix(sshd:session): session closed for user core Dec 16 03:27:23.090350 systemd[1]: sshd@3-144.126.212.19:22-147.75.109.163:36148.service: Deactivated successfully. Dec 16 03:27:23.093332 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 03:27:23.096041 systemd-logind[1576]: Session 5 logged out. Waiting for processes to exit. Dec 16 03:27:23.099564 systemd[1]: Started sshd@4-144.126.212.19:22-147.75.109.163:36158.service - OpenSSH per-connection server daemon (147.75.109.163:36158). Dec 16 03:27:23.101161 systemd-logind[1576]: Removed session 5. Dec 16 03:27:23.181819 sshd[1773]: Accepted publickey for core from 147.75.109.163 port 36158 ssh2: RSA SHA256:6kOFCGRVte/xmFQ810xFB2aQq7tPvT6sNyInYz0ISZM Dec 16 03:27:23.184022 sshd-session[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:27:23.191966 systemd-logind[1576]: New session 6 of user core. Dec 16 03:27:23.202469 systemd[1]: Started session-6.scope - Session 6 of User core. 
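Editor's note: the kubelet failure above ("failed to load kubelet config file ... /var/lib/kubelet/config.yaml: no such file or directory") is the usual state of a node that has not yet been joined, since kubeadm init/join is what normally writes that file. A small sketch for checking whether the file has appeared yet; the path comes from the log line, the reporting wording is mine.

    from pathlib import Path

    # Path taken from the kubelet error above; kubeadm normally creates it on init/join.
    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

    def kubelet_config_status() -> str:
        if KUBELET_CONFIG.is_file():
            size = KUBELET_CONFIG.stat().st_size
            return (f"{KUBELET_CONFIG} present ({size} bytes); "
                    "kubelet can load it on its next start attempt.")
        # Before kubeadm runs, the exit-code=1 failure seen above is expected;
        # the unit is commonly configured to keep restarting until the file exists.
        return f"{KUBELET_CONFIG} missing; kubelet will keep failing until it is written."

    if __name__ == "__main__":
        print(kubelet_config_status())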
Dec 16 03:27:23.237618 sudo[1778]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 03:27:23.238500 sudo[1778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 03:27:23.253146 sudo[1778]: pam_unix(sudo:session): session closed for user root Dec 16 03:27:23.256812 sshd[1777]: Connection closed by 147.75.109.163 port 36158 Dec 16 03:27:23.257792 sshd-session[1773]: pam_unix(sshd:session): session closed for user core Dec 16 03:27:23.272725 systemd[1]: sshd@4-144.126.212.19:22-147.75.109.163:36158.service: Deactivated successfully. Dec 16 03:27:23.275892 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 03:27:23.277070 systemd-logind[1576]: Session 6 logged out. Waiting for processes to exit. Dec 16 03:27:23.282015 systemd[1]: Started sshd@5-144.126.212.19:22-147.75.109.163:36160.service - OpenSSH per-connection server daemon (147.75.109.163:36160). Dec 16 03:27:23.283637 systemd-logind[1576]: Removed session 6. Dec 16 03:27:23.353026 sshd[1785]: Accepted publickey for core from 147.75.109.163 port 36160 ssh2: RSA SHA256:6kOFCGRVte/xmFQ810xFB2aQq7tPvT6sNyInYz0ISZM Dec 16 03:27:23.354687 sshd-session[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:27:23.361251 systemd-logind[1576]: New session 7 of user core. Dec 16 03:27:23.373466 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 03:27:23.395484 sudo[1791]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 03:27:23.395895 sudo[1791]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 03:27:23.399430 sudo[1791]: pam_unix(sudo:session): session closed for user root Dec 16 03:27:23.408929 sudo[1790]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 03:27:23.409726 sudo[1790]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 03:27:23.420021 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 03:27:23.472438 kernel: kauditd_printk_skb: 123 callbacks suppressed Dec 16 03:27:23.472570 kernel: audit: type=1305 audit(1765855643.470:232): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 03:27:23.470000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 03:27:23.470000 audit[1815]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe2d7b2770 a2=420 a3=0 items=0 ppid=1796 pid=1815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:23.473727 augenrules[1815]: No rules Dec 16 03:27:23.476996 kernel: audit: type=1300 audit(1765855643.470:232): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe2d7b2770 a2=420 a3=0 items=0 ppid=1796 pid=1815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:23.477566 systemd[1]: audit-rules.service: Deactivated successfully. 
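Editor's note: the sudo entries in this span (e.g. "core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1") use sudo's usual "key=value ; ..." log format. A rough sketch (hypothetical helper, regex written against the lines shown here) that turns such entries into structured records for reviewing which privileged commands were run.

    import re

    SUDO_RE = re.compile(
        r'sudo\[\d+\]:\s+(?P<user>\S+) :\s+PWD=(?P<pwd>\S+) ;'
        r' USER=(?P<runas>\S+) ; COMMAND=(?P<command>.+)$'
    )

    def parse_sudo_entry(line: str):
        """Return user/PWD/run-as/command fields for a sudo command log line, else None."""
        m = SUDO_RE.search(line)
        return m.groupdict() if m else None

    sample = ("Dec 16 03:27:23.237618 sudo[1778]: core : PWD=/home/core ; "
              "USER=root ; COMMAND=/usr/sbin/setenforce 1")
    print(parse_sudo_entry(sample))
    # {'user': 'core', 'pwd': '/home/core', 'runas': 'root', 'command': '/usr/sbin/setenforce 1'}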
Dec 16 03:27:23.470000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 03:27:23.477895 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 03:27:23.480217 kernel: audit: type=1327 audit(1765855643.470:232): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 03:27:23.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:23.480563 sudo[1790]: pam_unix(sudo:session): session closed for user root Dec 16 03:27:23.484143 kernel: audit: type=1130 audit(1765855643.476:233): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:23.484226 kernel: audit: type=1131 audit(1765855643.476:234): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:23.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:23.486583 sshd[1789]: Connection closed by 147.75.109.163 port 36160 Dec 16 03:27:23.487257 sshd-session[1785]: pam_unix(sshd:session): session closed for user core Dec 16 03:27:23.479000 audit[1790]: USER_END pid=1790 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:27:23.479000 audit[1790]: CRED_DISP pid=1790 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:27:23.493315 kernel: audit: type=1106 audit(1765855643.479:235): pid=1790 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:27:23.493384 kernel: audit: type=1104 audit(1765855643.479:236): pid=1790 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 03:27:23.492000 audit[1785]: USER_END pid=1785 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:23.492000 audit[1785]: CRED_DISP pid=1785 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:23.500371 kernel: audit: type=1106 audit(1765855643.492:237): pid=1785 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:23.500494 kernel: audit: type=1104 audit(1765855643.492:238): pid=1785 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:23.505254 systemd[1]: sshd@5-144.126.212.19:22-147.75.109.163:36160.service: Deactivated successfully. Dec 16 03:27:23.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-144.126.212.19:22-147.75.109.163:36160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:23.508120 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 03:27:23.509121 kernel: audit: type=1131 audit(1765855643.504:239): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-144.126.212.19:22-147.75.109.163:36160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:23.510230 systemd-logind[1576]: Session 7 logged out. Waiting for processes to exit. Dec 16 03:27:23.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-144.126.212.19:22-147.75.109.163:36172 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:23.513682 systemd[1]: Started sshd@6-144.126.212.19:22-147.75.109.163:36172.service - OpenSSH per-connection server daemon (147.75.109.163:36172). Dec 16 03:27:23.516041 systemd-logind[1576]: Removed session 7. 
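Editor's note: much of this span is rapid SSH session churn, with repeated accept/close pairs from 147.75.109.163 as sessions 3 through 7 open and close within seconds. A throwaway sketch (the counting logic is mine) that tallies accepted versus closed sshd connections per source address from lines shaped like the entries above.

    import re
    from collections import Counter

    ACCEPTED = re.compile(r'Accepted publickey for (\S+) from (\S+) port (\d+)')
    CLOSED = re.compile(r'Connection closed by (\S+) port (\d+)')

    def summarize(lines):
        """Count accepted vs closed sshd connections per source IP."""
        accepted, closed = Counter(), Counter()
        for line in lines:
            if (m := ACCEPTED.search(line)):
                accepted[m.group(2)] += 1
            elif (m := CLOSED.search(line)):
                closed[m.group(1)] += 1
        return accepted, closed

    sample = [
        "sshd[1741]: Accepted publickey for core from 147.75.109.163 port 36128 "
        "ssh2: RSA SHA256:6kOFCGRVte/xmFQ810xFB2aQq7tPvT6sNyInYz0ISZM",
        "sshd[1745]: Connection closed by 147.75.109.163 port 36128",
    ]
    print(summarize(sample))
    # (Counter({'147.75.109.163': 1}), Counter({'147.75.109.163': 1}))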
Dec 16 03:27:23.598000 audit[1824]: USER_ACCT pid=1824 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:23.601283 sshd[1824]: Accepted publickey for core from 147.75.109.163 port 36172 ssh2: RSA SHA256:6kOFCGRVte/xmFQ810xFB2aQq7tPvT6sNyInYz0ISZM Dec 16 03:27:23.600000 audit[1824]: CRED_ACQ pid=1824 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:23.600000 audit[1824]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffb5859330 a2=3 a3=0 items=0 ppid=1 pid=1824 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:23.600000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:27:23.602290 sshd-session[1824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:27:23.609180 systemd-logind[1576]: New session 8 of user core. Dec 16 03:27:23.625504 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 03:27:23.628000 audit[1824]: USER_START pid=1824 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:23.631000 audit[1828]: CRED_ACQ pid=1828 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:23.646000 audit[1829]: USER_ACCT pid=1829 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:27:23.646000 audit[1829]: CRED_REFR pid=1829 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:27:23.646000 audit[1829]: USER_START pid=1829 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:27:23.647708 sudo[1829]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 03:27:23.648040 sudo[1829]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 03:27:24.216441 systemd[1]: Starting docker.service - Docker Application Container Engine... 
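Editor's note: the audit records just above (USER_ACCT, CRED_ACQ, SYSCALL, PROCTITLE, then USER_START for pid 1824) are the normal PAM accounting sequence for a single sshd login. A quick sketch (names are mine) that extracts record type and pid from "audit[pid]: TYPE ..." entries so the per-process sequence can be reconstructed.

    import re
    from collections import defaultdict

    AUDIT_RE = re.compile(r'audit\[(?P<pid>\d+)\]:\s+(?P<type>[A-Z_]+)')

    def sequences_by_pid(lines):
        """Group audit record types by the pid in 'audit[pid]: TYPE ...' entries."""
        seq = defaultdict(list)
        for line in lines:
            m = AUDIT_RE.search(line)
            if m:
                seq[int(m.group("pid"))].append(m.group("type"))
        return dict(seq)

    sample = [
        "Dec 16 03:27:23.598000 audit[1824]: USER_ACCT pid=1824 ...",
        "Dec 16 03:27:23.600000 audit[1824]: CRED_ACQ pid=1824 ...",
        "Dec 16 03:27:23.628000 audit[1824]: USER_START pid=1824 ...",
    ]
    print(sequences_by_pid(sample))
    # {1824: ['USER_ACCT', 'CRED_ACQ', 'USER_START']}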
Dec 16 03:27:24.245835 (dockerd)[1848]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 03:27:24.729984 dockerd[1848]: time="2025-12-16T03:27:24.729889103Z" level=info msg="Starting up" Dec 16 03:27:24.731867 dockerd[1848]: time="2025-12-16T03:27:24.731812610Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 03:27:24.747642 dockerd[1848]: time="2025-12-16T03:27:24.747501750Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 03:27:24.886468 dockerd[1848]: time="2025-12-16T03:27:24.886346600Z" level=info msg="Loading containers: start." Dec 16 03:27:24.900348 kernel: Initializing XFRM netlink socket Dec 16 03:27:24.985000 audit[1897]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1897 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:24.985000 audit[1897]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe4ce7a9d0 a2=0 a3=0 items=0 ppid=1848 pid=1897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:24.985000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 03:27:24.988000 audit[1899]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1899 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:24.988000 audit[1899]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff4b0280f0 a2=0 a3=0 items=0 ppid=1848 pid=1899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:24.988000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 03:27:24.990000 audit[1901]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1901 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:24.990000 audit[1901]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcee6d9dd0 a2=0 a3=0 items=0 ppid=1848 pid=1901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:24.990000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 03:27:24.993000 audit[1903]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1903 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:24.993000 audit[1903]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcc5d960b0 a2=0 a3=0 items=0 ppid=1848 pid=1903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:24.993000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 03:27:24.996000 audit[1905]: NETFILTER_CFG table=filter:6 family=2 entries=1 
op=nft_register_chain pid=1905 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:24.996000 audit[1905]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc30e10930 a2=0 a3=0 items=0 ppid=1848 pid=1905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:24.996000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 03:27:24.998000 audit[1907]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1907 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:24.998000 audit[1907]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff9fc7bc20 a2=0 a3=0 items=0 ppid=1848 pid=1907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:24.998000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 03:27:25.001000 audit[1909]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1909 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:25.001000 audit[1909]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff3ab3eb40 a2=0 a3=0 items=0 ppid=1848 pid=1909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:25.001000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 03:27:25.005000 audit[1911]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1911 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:25.005000 audit[1911]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffca744e740 a2=0 a3=0 items=0 ppid=1848 pid=1911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:25.005000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 03:27:25.044000 audit[1914]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1914 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:25.044000 audit[1914]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7fff9af9d7e0 a2=0 a3=0 items=0 ppid=1848 pid=1914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:25.044000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 16 03:27:25.048000 audit[1916]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1916 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:25.048000 audit[1916]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe9ec42bd0 a2=0 a3=0 items=0 ppid=1848 pid=1916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:25.048000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 03:27:25.051000 audit[1918]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1918 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:25.051000 audit[1918]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffe4efaaec0 a2=0 a3=0 items=0 ppid=1848 pid=1918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:25.051000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 03:27:25.054000 audit[1920]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1920 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:25.054000 audit[1920]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc8bd015d0 a2=0 a3=0 items=0 ppid=1848 pid=1920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:25.054000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 03:27:25.058000 audit[1922]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1922 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:25.058000 audit[1922]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffce1596240 a2=0 a3=0 items=0 ppid=1848 pid=1922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:25.058000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 03:27:25.109000 audit[1952]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1952 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:25.109000 audit[1952]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fffb43e5c20 a2=0 a3=0 items=0 ppid=1848 pid=1952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:25.109000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 03:27:25.112000 audit[1954]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1954 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:25.112000 audit[1954]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffed4e5d740 a2=0 a3=0 items=0 ppid=1848 pid=1954 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:25.112000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 03:27:25.114000 audit[1956]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1956 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:25.114000 audit[1956]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe6b9795a0 a2=0 a3=0 items=0 ppid=1848 pid=1956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:25.114000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 03:27:25.117000 audit[1958]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1958 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:25.117000 audit[1958]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffed501f440 a2=0 a3=0 items=0 ppid=1848 pid=1958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:25.117000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 03:27:25.119000 audit[1960]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1960 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:25.119000 audit[1960]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffeb7ce5e90 a2=0 a3=0 items=0 ppid=1848 pid=1960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:25.119000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 03:27:25.121000 audit[1962]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1962 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:25.121000 audit[1962]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffeae99c1d0 a2=0 a3=0 items=0 ppid=1848 pid=1962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:25.121000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 03:27:25.124000 audit[1964]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1964 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:25.124000 audit[1964]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd046fa510 a2=0 a3=0 items=0 ppid=1848 pid=1964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:25.124000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 03:27:25.127000 audit[1966]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1966 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:25.127000 audit[1966]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffde48d0cf0 a2=0 a3=0 items=0 ppid=1848 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:25.127000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 03:27:25.130000 audit[1968]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1968 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:25.130000 audit[1968]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffec5f7b2f0 a2=0 a3=0 items=0 ppid=1848 pid=1968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:25.130000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 16 03:27:25.132000 audit[1970]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1970 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:25.132000 audit[1970]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc5ce89430 a2=0 a3=0 items=0 ppid=1848 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:25.132000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 03:27:25.135000 audit[1972]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1972 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:25.135000 audit[1972]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffe10dc7840 a2=0 a3=0 items=0 ppid=1848 pid=1972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:25.135000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 03:27:25.137000 audit[1974]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1974 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:25.137000 audit[1974]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffdf06b1950 a2=0 a3=0 items=0 ppid=1848 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:25.137000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 03:27:25.140000 audit[1976]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1976 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:25.140000 audit[1976]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fff24e0e500 a2=0 a3=0 items=0 ppid=1848 pid=1976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:25.140000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 03:27:25.146000 audit[1981]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1981 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:25.146000 audit[1981]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc01c83270 a2=0 a3=0 items=0 ppid=1848 pid=1981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:25.146000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 03:27:25.149000 audit[1983]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1983 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:25.149000 audit[1983]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fffff071520 a2=0 a3=0 items=0 ppid=1848 pid=1983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:25.149000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 03:27:25.151000 audit[1985]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1985 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:25.151000 audit[1985]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe3f261680 a2=0 a3=0 items=0 ppid=1848 pid=1985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:25.151000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 03:27:25.154000 audit[1987]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=1987 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:25.154000 audit[1987]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdba2fffc0 a2=0 a3=0 items=0 ppid=1848 pid=1987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:25.154000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 03:27:25.157000 audit[1989]: NETFILTER_CFG table=filter:32 family=10 entries=1 
op=nft_register_rule pid=1989 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:25.157000 audit[1989]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd103bd4b0 a2=0 a3=0 items=0 ppid=1848 pid=1989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:25.157000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 03:27:25.160000 audit[1991]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=1991 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:25.160000 audit[1991]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd27f45e20 a2=0 a3=0 items=0 ppid=1848 pid=1991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:25.160000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 03:27:25.194000 audit[1998]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=1998 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:25.194000 audit[1998]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7fffe68aeb70 a2=0 a3=0 items=0 ppid=1848 pid=1998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:25.194000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 16 03:27:25.199000 audit[2000]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2000 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:25.199000 audit[2000]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffc5c3b6520 a2=0 a3=0 items=0 ppid=1848 pid=2000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:25.199000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 16 03:27:25.214000 audit[2008]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2008 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:25.214000 audit[2008]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffec3f15a40 a2=0 a3=0 items=0 ppid=1848 pid=2008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:25.214000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 16 03:27:25.228000 audit[2014]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2014 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:25.228000 
audit[2014]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd0e8d9a70 a2=0 a3=0 items=0 ppid=1848 pid=2014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:25.228000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 16 03:27:25.232000 audit[2016]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2016 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:25.232000 audit[2016]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7fff93363920 a2=0 a3=0 items=0 ppid=1848 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:25.232000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 16 03:27:25.236000 audit[2018]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2018 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:25.236000 audit[2018]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffee7b3a7c0 a2=0 a3=0 items=0 ppid=1848 pid=2018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:25.236000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 16 03:27:25.241000 audit[2020]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2020 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:25.241000 audit[2020]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fff6e195940 a2=0 a3=0 items=0 ppid=1848 pid=2020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:25.241000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 03:27:25.245000 audit[2022]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2022 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:25.245000 audit[2022]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcb3b99bc0 a2=0 a3=0 items=0 ppid=1848 pid=2022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:25.245000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 16 03:27:25.247856 
systemd-networkd[1494]: docker0: Link UP Dec 16 03:27:25.251434 dockerd[1848]: time="2025-12-16T03:27:25.251361559Z" level=info msg="Loading containers: done." Dec 16 03:27:25.273418 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2526884258-merged.mount: Deactivated successfully. Dec 16 03:27:25.275767 dockerd[1848]: time="2025-12-16T03:27:25.274753626Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 03:27:25.275767 dockerd[1848]: time="2025-12-16T03:27:25.274867881Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 03:27:25.275767 dockerd[1848]: time="2025-12-16T03:27:25.275003086Z" level=info msg="Initializing buildkit" Dec 16 03:27:25.302927 dockerd[1848]: time="2025-12-16T03:27:25.302848514Z" level=info msg="Completed buildkit initialization" Dec 16 03:27:25.315582 dockerd[1848]: time="2025-12-16T03:27:25.315507227Z" level=info msg="Daemon has completed initialization" Dec 16 03:27:25.315920 dockerd[1848]: time="2025-12-16T03:27:25.315883927Z" level=info msg="API listen on /run/docker.sock" Dec 16 03:27:25.316264 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 03:27:25.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:26.060536 containerd[1596]: time="2025-12-16T03:27:26.060081665Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Dec 16 03:27:26.733837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount934516215.mount: Deactivated successfully. 
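The PROCTITLE values in the audit records above are the argv of the command that triggered each netfilter change, hex-encoded with NUL separators; the first one, for instance, decodes to /usr/bin/ip6tables --wait -A DOCKER-USER -j RETURN, i.e. the rules dockerd installs while "Loading containers". A minimal decoder is sketched below; the helper name is illustrative and not part of any tool, and the kernel truncates long command lines, so some of the longer values in this log decode only partially.

    def decode_proctitle(hex_value: str) -> str:
        # audit PROCTITLE is the process argv, hex-encoded, with NUL bytes between arguments
        raw = bytes.fromhex(hex_value)
        return " ".join(arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg)

    print(decode_proctitle(
        "2F7573722F62696E2F6970367461626C6573002D2D77616974"
        "002D4100444F434B45522D55534552002D6A0052455455524E"
    ))
    # -> /usr/bin/ip6tables --wait -A DOCKER-USER -j RETURN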
Dec 16 03:27:27.863817 containerd[1596]: time="2025-12-16T03:27:27.863743397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:27:27.865205 containerd[1596]: time="2025-12-16T03:27:27.865155597Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=25399650" Dec 16 03:27:27.866117 containerd[1596]: time="2025-12-16T03:27:27.865781229Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:27:27.869147 containerd[1596]: time="2025-12-16T03:27:27.868487659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:27:27.870053 containerd[1596]: time="2025-12-16T03:27:27.869821153Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 1.809667362s" Dec 16 03:27:27.870169 containerd[1596]: time="2025-12-16T03:27:27.870057390Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Dec 16 03:27:27.870686 containerd[1596]: time="2025-12-16T03:27:27.870653768Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Dec 16 03:27:29.422928 containerd[1596]: time="2025-12-16T03:27:29.422852720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:27:29.424363 containerd[1596]: time="2025-12-16T03:27:29.424303494Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21154285" Dec 16 03:27:29.425259 containerd[1596]: time="2025-12-16T03:27:29.425210770Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:27:29.431355 containerd[1596]: time="2025-12-16T03:27:29.431282508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:27:29.433932 containerd[1596]: time="2025-12-16T03:27:29.433361410Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 1.562663788s" Dec 16 03:27:29.434309 containerd[1596]: time="2025-12-16T03:27:29.434270714Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Dec 16 03:27:29.435940 
containerd[1596]: time="2025-12-16T03:27:29.435900165Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Dec 16 03:27:30.644039 containerd[1596]: time="2025-12-16T03:27:30.643931121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:27:30.646005 containerd[1596]: time="2025-12-16T03:27:30.645929522Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=0" Dec 16 03:27:30.647372 containerd[1596]: time="2025-12-16T03:27:30.647283503Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:27:30.651519 containerd[1596]: time="2025-12-16T03:27:30.651402141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:27:30.652822 containerd[1596]: time="2025-12-16T03:27:30.652430334Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 1.216273628s" Dec 16 03:27:30.652822 containerd[1596]: time="2025-12-16T03:27:30.652485023Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Dec 16 03:27:30.653814 containerd[1596]: time="2025-12-16T03:27:30.653771320Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Dec 16 03:27:31.777276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3947259750.mount: Deactivated successfully. 
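Each of the containerd "Pulled image ... in <duration>" records above reports the wall-clock time for one image pull (roughly 1.8 s for kube-apiserver, 1.6 s for kube-controller-manager and 1.2 s for kube-scheduler so far). A small helper for extracting those figures from a journal dump like this one; the regex is an assumption based on the quoting seen in these lines, not a stable containerd format.

    import re

    PULLED = re.compile(r'Pulled image \\"(?P<image>[^"]+?)\\".*? in (?P<dur>[\d.]+(?:ms|s))')

    def pull_times(journal_lines):
        for line in journal_lines:
            for m in PULLED.finditer(line):
                yield m.group("image"), m.group("dur")

    # e.g. ("registry.k8s.io/kube-apiserver:v1.34.3", "1.809667362s")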
Dec 16 03:27:32.131139 containerd[1596]: time="2025-12-16T03:27:32.131047645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:27:32.133587 containerd[1596]: time="2025-12-16T03:27:32.133532631Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=0" Dec 16 03:27:32.134842 containerd[1596]: time="2025-12-16T03:27:32.134618326Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:27:32.136229 containerd[1596]: time="2025-12-16T03:27:32.136195732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:27:32.137137 containerd[1596]: time="2025-12-16T03:27:32.137099441Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.483271439s" Dec 16 03:27:32.137264 containerd[1596]: time="2025-12-16T03:27:32.137246573Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Dec 16 03:27:32.138360 containerd[1596]: time="2025-12-16T03:27:32.138332504Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Dec 16 03:27:32.248437 systemd-resolved[1264]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Dec 16 03:27:32.936945 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 03:27:32.945563 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:27:32.961855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3807989791.mount: Deactivated successfully. Dec 16 03:27:33.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:33.152484 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:27:33.153685 kernel: kauditd_printk_skb: 132 callbacks suppressed Dec 16 03:27:33.153734 kernel: audit: type=1130 audit(1765855653.151:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:27:33.174788 (kubelet)[2163]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 03:27:33.249635 kubelet[2163]: E1216 03:27:33.249476 2163 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 03:27:33.254846 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 03:27:33.255001 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 03:27:33.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 03:27:33.255894 systemd[1]: kubelet.service: Consumed 220ms CPU time, 110.4M memory peak. Dec 16 03:27:33.260176 kernel: audit: type=1131 audit(1765855653.254:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 03:27:34.030772 containerd[1596]: time="2025-12-16T03:27:34.030699432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:27:34.032025 containerd[1596]: time="2025-12-16T03:27:34.031753052Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=21820306" Dec 16 03:27:34.032807 containerd[1596]: time="2025-12-16T03:27:34.032757602Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:27:34.036624 containerd[1596]: time="2025-12-16T03:27:34.036558472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:27:34.038052 containerd[1596]: time="2025-12-16T03:27:34.037790172Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.899426955s" Dec 16 03:27:34.038052 containerd[1596]: time="2025-12-16T03:27:34.037831937Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Dec 16 03:27:34.038938 containerd[1596]: time="2025-12-16T03:27:34.038880322Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Dec 16 03:27:34.612459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount642349249.mount: Deactivated successfully. 
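The kubelet exit above ("failed to load Kubelet config file /var/lib/kubelet/config.yaml ... no such file or directory") is the expected state on a node that has not yet been initialised or joined with kubeadm, which is what normally writes that file; systemd keeps scheduling restarts until it appears. A trivial, purely illustrative check for that precondition:

    from pathlib import Path

    # Path taken from the run.go error message above; the helper itself is not part of any tool.
    def kubelet_config_present(root: str = "/") -> bool:
        return (Path(root) / "var/lib/kubelet/config.yaml").is_file()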
Dec 16 03:27:34.616881 containerd[1596]: time="2025-12-16T03:27:34.616809680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:27:34.618196 containerd[1596]: time="2025-12-16T03:27:34.618128781Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Dec 16 03:27:34.619116 containerd[1596]: time="2025-12-16T03:27:34.619054189Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:27:34.620939 containerd[1596]: time="2025-12-16T03:27:34.620890083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:27:34.622505 containerd[1596]: time="2025-12-16T03:27:34.621587330Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 582.670711ms" Dec 16 03:27:34.622505 containerd[1596]: time="2025-12-16T03:27:34.621623373Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Dec 16 03:27:34.622505 containerd[1596]: time="2025-12-16T03:27:34.622187010Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Dec 16 03:27:35.193300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1245454631.mount: Deactivated successfully. Dec 16 03:27:35.361340 systemd-resolved[1264]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
Dec 16 03:27:39.494238 containerd[1596]: time="2025-12-16T03:27:39.494160624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:27:39.495655 containerd[1596]: time="2025-12-16T03:27:39.495601608Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=72348001" Dec 16 03:27:39.496711 containerd[1596]: time="2025-12-16T03:27:39.496191822Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:27:39.499688 containerd[1596]: time="2025-12-16T03:27:39.499638589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:27:39.501006 containerd[1596]: time="2025-12-16T03:27:39.500967587Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 4.87875068s" Dec 16 03:27:39.501006 containerd[1596]: time="2025-12-16T03:27:39.501008725Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Dec 16 03:27:42.618650 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:27:42.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:42.619437 systemd[1]: kubelet.service: Consumed 220ms CPU time, 110.4M memory peak. Dec 16 03:27:42.624193 kernel: audit: type=1130 audit(1765855662.618:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:42.624310 kernel: audit: type=1131 audit(1765855662.618:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:42.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:42.623387 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:27:42.664717 systemd[1]: Reload requested from client PID 2295 ('systemctl') (unit session-8.scope)... Dec 16 03:27:42.664908 systemd[1]: Reloading... Dec 16 03:27:42.812119 zram_generator::config[2341]: No configuration found. Dec 16 03:27:43.100203 systemd[1]: Reloading finished in 434 ms. 
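The systemd reload that finishes here is followed by a burst of "audit: BPF prog-id=NN op=LOAD/UNLOAD" records, which is what detaching and re-attaching per-unit cgroup BPF programs during a daemon-reload tends to look like. One way to summarise that churn from a dump like this (illustrative helper, not part of the log):

    import re
    from collections import Counter

    BPF_OP = re.compile(r"BPF prog-id=\d+ op=(LOAD|UNLOAD)")

    def bpf_op_counts(journal_lines):
        # tallies LOAD vs UNLOAD events, e.g. over the block that follows the reload
        return Counter(m.group(1) for line in journal_lines for m in BPF_OP.finditer(line))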
Dec 16 03:27:43.136231 kernel: audit: type=1334 audit(1765855663.132:294): prog-id=63 op=LOAD Dec 16 03:27:43.132000 audit: BPF prog-id=63 op=LOAD Dec 16 03:27:43.132000 audit: BPF prog-id=60 op=UNLOAD Dec 16 03:27:43.139181 kernel: audit: type=1334 audit(1765855663.132:295): prog-id=60 op=UNLOAD Dec 16 03:27:43.132000 audit: BPF prog-id=64 op=LOAD Dec 16 03:27:43.142173 kernel: audit: type=1334 audit(1765855663.132:296): prog-id=64 op=LOAD Dec 16 03:27:43.143203 kernel: audit: type=1334 audit(1765855663.132:297): prog-id=65 op=LOAD Dec 16 03:27:43.132000 audit: BPF prog-id=65 op=LOAD Dec 16 03:27:43.133000 audit: BPF prog-id=61 op=UNLOAD Dec 16 03:27:43.147261 kernel: audit: type=1334 audit(1765855663.133:298): prog-id=61 op=UNLOAD Dec 16 03:27:43.133000 audit: BPF prog-id=62 op=UNLOAD Dec 16 03:27:43.150174 kernel: audit: type=1334 audit(1765855663.133:299): prog-id=62 op=UNLOAD Dec 16 03:27:43.150259 kernel: audit: type=1334 audit(1765855663.135:300): prog-id=66 op=LOAD Dec 16 03:27:43.135000 audit: BPF prog-id=66 op=LOAD Dec 16 03:27:43.135000 audit: BPF prog-id=46 op=UNLOAD Dec 16 03:27:43.136000 audit: BPF prog-id=67 op=LOAD Dec 16 03:27:43.155181 kernel: audit: type=1334 audit(1765855663.135:301): prog-id=46 op=UNLOAD Dec 16 03:27:43.136000 audit: BPF prog-id=68 op=LOAD Dec 16 03:27:43.136000 audit: BPF prog-id=47 op=UNLOAD Dec 16 03:27:43.136000 audit: BPF prog-id=48 op=UNLOAD Dec 16 03:27:43.136000 audit: BPF prog-id=69 op=LOAD Dec 16 03:27:43.137000 audit: BPF prog-id=59 op=UNLOAD Dec 16 03:27:43.139000 audit: BPF prog-id=70 op=LOAD Dec 16 03:27:43.139000 audit: BPF prog-id=71 op=LOAD Dec 16 03:27:43.139000 audit: BPF prog-id=49 op=UNLOAD Dec 16 03:27:43.139000 audit: BPF prog-id=50 op=UNLOAD Dec 16 03:27:43.140000 audit: BPF prog-id=72 op=LOAD Dec 16 03:27:43.140000 audit: BPF prog-id=43 op=UNLOAD Dec 16 03:27:43.140000 audit: BPF prog-id=73 op=LOAD Dec 16 03:27:43.140000 audit: BPF prog-id=74 op=LOAD Dec 16 03:27:43.140000 audit: BPF prog-id=44 op=UNLOAD Dec 16 03:27:43.140000 audit: BPF prog-id=45 op=UNLOAD Dec 16 03:27:43.141000 audit: BPF prog-id=75 op=LOAD Dec 16 03:27:43.141000 audit: BPF prog-id=51 op=UNLOAD Dec 16 03:27:43.143000 audit: BPF prog-id=76 op=LOAD Dec 16 03:27:43.143000 audit: BPF prog-id=58 op=UNLOAD Dec 16 03:27:43.145000 audit: BPF prog-id=77 op=LOAD Dec 16 03:27:43.145000 audit: BPF prog-id=55 op=UNLOAD Dec 16 03:27:43.145000 audit: BPF prog-id=78 op=LOAD Dec 16 03:27:43.145000 audit: BPF prog-id=79 op=LOAD Dec 16 03:27:43.145000 audit: BPF prog-id=56 op=UNLOAD Dec 16 03:27:43.145000 audit: BPF prog-id=57 op=UNLOAD Dec 16 03:27:43.146000 audit: BPF prog-id=80 op=LOAD Dec 16 03:27:43.146000 audit: BPF prog-id=52 op=UNLOAD Dec 16 03:27:43.146000 audit: BPF prog-id=81 op=LOAD Dec 16 03:27:43.146000 audit: BPF prog-id=82 op=LOAD Dec 16 03:27:43.146000 audit: BPF prog-id=53 op=UNLOAD Dec 16 03:27:43.146000 audit: BPF prog-id=54 op=UNLOAD Dec 16 03:27:43.171253 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 03:27:43.171429 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 03:27:43.172267 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:27:43.172450 systemd[1]: kubelet.service: Consumed 124ms CPU time, 97.8M memory peak. Dec 16 03:27:43.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 16 03:27:43.176714 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:27:43.365853 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:27:43.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:43.384025 (kubelet)[2395]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 03:27:43.442852 kubelet[2395]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 03:27:43.442852 kubelet[2395]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 03:27:43.442852 kubelet[2395]: I1216 03:27:43.442174 2395 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 03:27:44.489720 kubelet[2395]: I1216 03:27:44.489513 2395 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 16 03:27:44.489720 kubelet[2395]: I1216 03:27:44.489561 2395 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 03:27:44.491943 kubelet[2395]: I1216 03:27:44.491908 2395 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 16 03:27:44.492111 kubelet[2395]: I1216 03:27:44.492097 2395 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 03:27:44.492769 kubelet[2395]: I1216 03:27:44.492707 2395 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 03:27:44.501974 kubelet[2395]: I1216 03:27:44.501933 2395 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 03:27:44.509621 kubelet[2395]: E1216 03:27:44.509150 2395 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://144.126.212.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 144.126.212.19:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 03:27:44.519351 kubelet[2395]: I1216 03:27:44.519310 2395 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 03:27:44.531829 kubelet[2395]: I1216 03:27:44.531770 2395 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 16 03:27:44.533490 kubelet[2395]: I1216 03:27:44.533185 2395 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 03:27:44.534909 kubelet[2395]: I1216 03:27:44.533250 2395 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547.0.0-8-fbad3a37dc","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 03:27:44.535262 kubelet[2395]: I1216 03:27:44.535242 2395 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 03:27:44.535327 kubelet[2395]: I1216 03:27:44.535319 2395 container_manager_linux.go:306] "Creating device plugin manager" Dec 16 03:27:44.535542 kubelet[2395]: I1216 03:27:44.535530 2395 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 16 03:27:44.540201 kubelet[2395]: I1216 03:27:44.540160 2395 state_mem.go:36] "Initialized new in-memory state store" Dec 16 03:27:44.540771 kubelet[2395]: I1216 03:27:44.540748 2395 kubelet.go:475] "Attempting to sync node with API server" Dec 16 03:27:44.541835 kubelet[2395]: I1216 03:27:44.541395 2395 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 03:27:44.541835 kubelet[2395]: I1216 03:27:44.541440 2395 kubelet.go:387] "Adding apiserver pod source" Dec 16 03:27:44.541835 kubelet[2395]: I1216 03:27:44.541463 2395 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 03:27:44.541835 kubelet[2395]: E1216 03:27:44.541576 2395 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://144.126.212.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547.0.0-8-fbad3a37dc&limit=500&resourceVersion=0\": dial tcp 144.126.212.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 03:27:44.545886 kubelet[2395]: I1216 03:27:44.545854 2395 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 03:27:44.549755 kubelet[2395]: I1216 
03:27:44.549708 2395 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 03:27:44.549939 kubelet[2395]: I1216 03:27:44.549929 2395 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 03:27:44.550053 kubelet[2395]: W1216 03:27:44.550043 2395 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 03:27:44.550487 kubelet[2395]: E1216 03:27:44.550448 2395 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://144.126.212.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 144.126.212.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 03:27:44.558670 kubelet[2395]: I1216 03:27:44.558636 2395 server.go:1262] "Started kubelet" Dec 16 03:27:44.561142 kubelet[2395]: I1216 03:27:44.560969 2395 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 03:27:44.566589 kubelet[2395]: E1216 03:27:44.564843 2395 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://144.126.212.19:6443/api/v1/namespaces/default/events\": dial tcp 144.126.212.19:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4547.0.0-8-fbad3a37dc.1881945efaa003ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4547.0.0-8-fbad3a37dc,UID:ci-4547.0.0-8-fbad3a37dc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4547.0.0-8-fbad3a37dc,},FirstTimestamp:2025-12-16 03:27:44.55857246 +0000 UTC m=+1.169072086,LastTimestamp:2025-12-16 03:27:44.55857246 +0000 UTC m=+1.169072086,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547.0.0-8-fbad3a37dc,}" Dec 16 03:27:44.568000 audit[2409]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2409 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:44.568000 audit[2409]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffeef6895f0 a2=0 a3=0 items=0 ppid=2395 pid=2409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:44.568000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 03:27:44.571000 audit[2410]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2410 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:44.571000 audit[2410]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe63c2caf0 a2=0 a3=0 items=0 ppid=2395 pid=2410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:44.571000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 03:27:44.575018 kubelet[2395]: I1216 03:27:44.574765 2395 server.go:180] 
"Starting to listen" address="0.0.0.0" port=10250 Dec 16 03:27:44.581139 kubelet[2395]: I1216 03:27:44.580857 2395 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 03:27:44.581139 kubelet[2395]: I1216 03:27:44.580944 2395 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 16 03:27:44.582047 kubelet[2395]: I1216 03:27:44.581887 2395 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 03:27:44.584122 kubelet[2395]: I1216 03:27:44.583561 2395 server.go:310] "Adding debug handlers to kubelet server" Dec 16 03:27:44.584664 kubelet[2395]: I1216 03:27:44.584615 2395 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 16 03:27:44.585158 kubelet[2395]: E1216 03:27:44.585123 2395 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4547.0.0-8-fbad3a37dc\" not found" Dec 16 03:27:44.591136 kubelet[2395]: I1216 03:27:44.590207 2395 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 03:27:44.590000 audit[2414]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2414 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:44.590000 audit[2414]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff0ac60610 a2=0 a3=0 items=0 ppid=2395 pid=2414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:44.590000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 03:27:44.595133 kubelet[2395]: I1216 03:27:44.595063 2395 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 03:27:44.595320 kubelet[2395]: I1216 03:27:44.595179 2395 reconciler.go:29] "Reconciler: start to sync state" Dec 16 03:27:44.595000 audit[2416]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2416 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:44.595000 audit[2416]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd2aa3deb0 a2=0 a3=0 items=0 ppid=2395 pid=2416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:44.595000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 03:27:44.598594 kubelet[2395]: E1216 03:27:44.598562 2395 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://144.126.212.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.0.0-8-fbad3a37dc?timeout=10s\": dial tcp 144.126.212.19:6443: connect: connection refused" interval="200ms" Dec 16 03:27:44.599312 kubelet[2395]: E1216 03:27:44.599282 2395 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 03:27:44.600141 kubelet[2395]: E1216 03:27:44.600119 2395 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://144.126.212.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 144.126.212.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 03:27:44.600324 kubelet[2395]: I1216 03:27:44.600311 2395 factory.go:223] Registration of the containerd container factory successfully Dec 16 03:27:44.600409 kubelet[2395]: I1216 03:27:44.600401 2395 factory.go:223] Registration of the systemd container factory successfully Dec 16 03:27:44.600536 kubelet[2395]: I1216 03:27:44.600521 2395 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 03:27:44.612000 audit[2419]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2419 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:44.612000 audit[2419]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffef22ba640 a2=0 a3=0 items=0 ppid=2395 pid=2419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:44.612000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E Dec 16 03:27:44.613661 kubelet[2395]: I1216 03:27:44.613613 2395 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 16 03:27:44.613000 audit[2421]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2421 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:44.613000 audit[2421]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcabd5c7e0 a2=0 a3=0 items=0 ppid=2395 pid=2421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:44.613000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 03:27:44.615232 kubelet[2395]: I1216 03:27:44.615203 2395 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Dec 16 03:27:44.615232 kubelet[2395]: I1216 03:27:44.615230 2395 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 03:27:44.615330 kubelet[2395]: I1216 03:27:44.615263 2395 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 03:27:44.615330 kubelet[2395]: E1216 03:27:44.615311 2395 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 03:27:44.615000 audit[2422]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2422 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:44.615000 audit[2422]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd3ca89bc0 a2=0 a3=0 items=0 ppid=2395 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:44.615000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 03:27:44.617000 audit[2423]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2423 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:44.617000 audit[2423]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdd8ca1f60 a2=0 a3=0 items=0 ppid=2395 pid=2423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:44.617000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 03:27:44.619000 audit[2425]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2425 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:44.619000 audit[2425]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd60d095e0 a2=0 a3=0 items=0 ppid=2395 pid=2425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:44.619000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 03:27:44.619000 audit[2424]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2424 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:44.619000 audit[2424]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc3229ac70 a2=0 a3=0 items=0 ppid=2395 pid=2424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:44.619000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 03:27:44.623328 kubelet[2395]: E1216 03:27:44.623286 2395 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://144.126.212.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 144.126.212.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 03:27:44.621000 audit[2426]: NETFILTER_CFG 
table=nat:52 family=10 entries=1 op=nft_register_chain pid=2426 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:44.621000 audit[2426]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdcc2aaed0 a2=0 a3=0 items=0 ppid=2395 pid=2426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:44.621000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 03:27:44.625000 audit[2429]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2429 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:44.625000 audit[2429]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff3f8bc280 a2=0 a3=0 items=0 ppid=2395 pid=2429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:44.625000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 03:27:44.634935 kubelet[2395]: I1216 03:27:44.634884 2395 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 03:27:44.634935 kubelet[2395]: I1216 03:27:44.634908 2395 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 03:27:44.634935 kubelet[2395]: I1216 03:27:44.634953 2395 state_mem.go:36] "Initialized new in-memory state store" Dec 16 03:27:44.636726 kubelet[2395]: I1216 03:27:44.636685 2395 policy_none.go:49] "None policy: Start" Dec 16 03:27:44.636726 kubelet[2395]: I1216 03:27:44.636731 2395 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 03:27:44.636924 kubelet[2395]: I1216 03:27:44.636751 2395 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 16 03:27:44.638053 kubelet[2395]: I1216 03:27:44.638005 2395 policy_none.go:47] "Start" Dec 16 03:27:44.644169 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 03:27:44.665917 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 03:27:44.670776 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 03:27:44.684345 kubelet[2395]: E1216 03:27:44.684007 2395 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 03:27:44.685655 kubelet[2395]: I1216 03:27:44.684889 2395 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 03:27:44.685655 kubelet[2395]: I1216 03:27:44.684908 2395 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 03:27:44.685655 kubelet[2395]: I1216 03:27:44.685400 2395 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 03:27:44.688401 kubelet[2395]: E1216 03:27:44.688368 2395 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 03:27:44.688513 kubelet[2395]: E1216 03:27:44.688418 2395 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4547.0.0-8-fbad3a37dc\" not found" Dec 16 03:27:44.731302 systemd[1]: Created slice kubepods-burstable-podc3f86be2489750620b5b8733a3b05375.slice - libcontainer container kubepods-burstable-podc3f86be2489750620b5b8733a3b05375.slice. Dec 16 03:27:44.734176 kubelet[2395]: E1216 03:27:44.733999 2395 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://144.126.212.19:6443/api/v1/namespaces/default/events\": dial tcp 144.126.212.19:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4547.0.0-8-fbad3a37dc.1881945efaa003ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4547.0.0-8-fbad3a37dc,UID:ci-4547.0.0-8-fbad3a37dc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4547.0.0-8-fbad3a37dc,},FirstTimestamp:2025-12-16 03:27:44.55857246 +0000 UTC m=+1.169072086,LastTimestamp:2025-12-16 03:27:44.55857246 +0000 UTC m=+1.169072086,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547.0.0-8-fbad3a37dc,}" Dec 16 03:27:44.744652 kubelet[2395]: E1216 03:27:44.744406 2395 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-8-fbad3a37dc\" not found" node="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:44.749484 systemd[1]: Created slice kubepods-burstable-podf73ab487df9299f57c20ddb2184f443d.slice - libcontainer container kubepods-burstable-podf73ab487df9299f57c20ddb2184f443d.slice. Dec 16 03:27:44.752117 kubelet[2395]: E1216 03:27:44.751828 2395 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-8-fbad3a37dc\" not found" node="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:44.764132 systemd[1]: Created slice kubepods-burstable-pod5c8d4ae905b4741b53cdeb83e44e4017.slice - libcontainer container kubepods-burstable-pod5c8d4ae905b4741b53cdeb83e44e4017.slice. 
Dec 16 03:27:44.766688 kubelet[2395]: E1216 03:27:44.766654 2395 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-8-fbad3a37dc\" not found" node="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:44.787037 kubelet[2395]: I1216 03:27:44.786946 2395 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:44.787516 kubelet[2395]: E1216 03:27:44.787481 2395 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://144.126.212.19:6443/api/v1/nodes\": dial tcp 144.126.212.19:6443: connect: connection refused" node="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:44.796447 kubelet[2395]: I1216 03:27:44.796347 2395 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f73ab487df9299f57c20ddb2184f443d-flexvolume-dir\") pod \"kube-controller-manager-ci-4547.0.0-8-fbad3a37dc\" (UID: \"f73ab487df9299f57c20ddb2184f443d\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:44.796447 kubelet[2395]: I1216 03:27:44.796407 2395 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f73ab487df9299f57c20ddb2184f443d-k8s-certs\") pod \"kube-controller-manager-ci-4547.0.0-8-fbad3a37dc\" (UID: \"f73ab487df9299f57c20ddb2184f443d\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:44.796447 kubelet[2395]: I1216 03:27:44.796460 2395 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f73ab487df9299f57c20ddb2184f443d-kubeconfig\") pod \"kube-controller-manager-ci-4547.0.0-8-fbad3a37dc\" (UID: \"f73ab487df9299f57c20ddb2184f443d\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:44.796696 kubelet[2395]: I1216 03:27:44.796496 2395 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c3f86be2489750620b5b8733a3b05375-k8s-certs\") pod \"kube-apiserver-ci-4547.0.0-8-fbad3a37dc\" (UID: \"c3f86be2489750620b5b8733a3b05375\") " pod="kube-system/kube-apiserver-ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:44.796696 kubelet[2395]: I1216 03:27:44.796523 2395 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f73ab487df9299f57c20ddb2184f443d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547.0.0-8-fbad3a37dc\" (UID: \"f73ab487df9299f57c20ddb2184f443d\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:44.796696 kubelet[2395]: I1216 03:27:44.796549 2395 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5c8d4ae905b4741b53cdeb83e44e4017-kubeconfig\") pod \"kube-scheduler-ci-4547.0.0-8-fbad3a37dc\" (UID: \"5c8d4ae905b4741b53cdeb83e44e4017\") " pod="kube-system/kube-scheduler-ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:44.796696 kubelet[2395]: I1216 03:27:44.796572 2395 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c3f86be2489750620b5b8733a3b05375-ca-certs\") pod 
\"kube-apiserver-ci-4547.0.0-8-fbad3a37dc\" (UID: \"c3f86be2489750620b5b8733a3b05375\") " pod="kube-system/kube-apiserver-ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:44.796696 kubelet[2395]: I1216 03:27:44.796597 2395 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c3f86be2489750620b5b8733a3b05375-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547.0.0-8-fbad3a37dc\" (UID: \"c3f86be2489750620b5b8733a3b05375\") " pod="kube-system/kube-apiserver-ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:44.796811 kubelet[2395]: I1216 03:27:44.796653 2395 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f73ab487df9299f57c20ddb2184f443d-ca-certs\") pod \"kube-controller-manager-ci-4547.0.0-8-fbad3a37dc\" (UID: \"f73ab487df9299f57c20ddb2184f443d\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:44.799907 kubelet[2395]: E1216 03:27:44.799853 2395 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://144.126.212.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.0.0-8-fbad3a37dc?timeout=10s\": dial tcp 144.126.212.19:6443: connect: connection refused" interval="400ms" Dec 16 03:27:44.989659 kubelet[2395]: I1216 03:27:44.989610 2395 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:44.990312 kubelet[2395]: E1216 03:27:44.990273 2395 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://144.126.212.19:6443/api/v1/nodes\": dial tcp 144.126.212.19:6443: connect: connection refused" node="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:45.047697 kubelet[2395]: E1216 03:27:45.047605 2395 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:27:45.049046 containerd[1596]: time="2025-12-16T03:27:45.048812690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547.0.0-8-fbad3a37dc,Uid:c3f86be2489750620b5b8733a3b05375,Namespace:kube-system,Attempt:0,}" Dec 16 03:27:45.053859 kubelet[2395]: E1216 03:27:45.053272 2395 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:27:45.054459 containerd[1596]: time="2025-12-16T03:27:45.054376734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547.0.0-8-fbad3a37dc,Uid:f73ab487df9299f57c20ddb2184f443d,Namespace:kube-system,Attempt:0,}" Dec 16 03:27:45.064427 systemd-resolved[1264]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
Dec 16 03:27:45.068802 kubelet[2395]: E1216 03:27:45.068747 2395 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:27:45.069825 containerd[1596]: time="2025-12-16T03:27:45.069632399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547.0.0-8-fbad3a37dc,Uid:5c8d4ae905b4741b53cdeb83e44e4017,Namespace:kube-system,Attempt:0,}" Dec 16 03:27:45.200895 kubelet[2395]: E1216 03:27:45.200844 2395 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://144.126.212.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.0.0-8-fbad3a37dc?timeout=10s\": dial tcp 144.126.212.19:6443: connect: connection refused" interval="800ms" Dec 16 03:27:45.392505 kubelet[2395]: I1216 03:27:45.391823 2395 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:45.392505 kubelet[2395]: E1216 03:27:45.392263 2395 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://144.126.212.19:6443/api/v1/nodes\": dial tcp 144.126.212.19:6443: connect: connection refused" node="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:45.443376 kubelet[2395]: E1216 03:27:45.443317 2395 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://144.126.212.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 144.126.212.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 03:27:45.489887 kubelet[2395]: E1216 03:27:45.489839 2395 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://144.126.212.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 144.126.212.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 03:27:45.584699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2255083266.mount: Deactivated successfully. 
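The var-lib-containerd-tmpmounts-containerd\x2dmount....mount units that keep appearing (including the one just above) are systemd-escaped mount paths: "/" is written as "-" and a literal "-" as "\x2d". A small decoder, roughly what systemd-escape --unescape --path does, offered as an illustration only:

    def systemd_unescape_mount(unit: str) -> str:
        # r"var-lib-containerd-tmpmounts-containerd\x2dmount2255083266.mount"
        #   -> "/var/lib/containerd/tmpmounts/containerd-mount2255083266"
        name = unit[:-len(".mount")] if unit.endswith(".mount") else unit
        out, i = [], 0
        while i < len(name):
            if name.startswith(r"\x", i) and i + 4 <= len(name):
                out.append(chr(int(name[i + 2:i + 4], 16)))
                i += 4
            elif name[i] == "-":
                out.append("/")
                i += 1
            else:
                out.append(name[i])
                i += 1
        return "/" + "".join(out)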
Dec 16 03:27:45.589401 containerd[1596]: time="2025-12-16T03:27:45.589324171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 03:27:45.591041 containerd[1596]: time="2025-12-16T03:27:45.590327655Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 03:27:45.591828 containerd[1596]: time="2025-12-16T03:27:45.591801941Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 03:27:45.592267 containerd[1596]: time="2025-12-16T03:27:45.592240979Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 03:27:45.592513 containerd[1596]: time="2025-12-16T03:27:45.592485197Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 03:27:45.593025 containerd[1596]: time="2025-12-16T03:27:45.592999833Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 03:27:45.593525 containerd[1596]: time="2025-12-16T03:27:45.593492084Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 03:27:45.595126 containerd[1596]: time="2025-12-16T03:27:45.595002588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 03:27:45.596766 containerd[1596]: time="2025-12-16T03:27:45.596738145Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 533.913474ms" Dec 16 03:27:45.597762 containerd[1596]: time="2025-12-16T03:27:45.597722858Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 534.328925ms" Dec 16 03:27:45.598887 containerd[1596]: time="2025-12-16T03:27:45.598839309Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 527.755417ms" Dec 16 03:27:45.705218 containerd[1596]: time="2025-12-16T03:27:45.704288845Z" level=info msg="connecting to shim 3028fa42b964d477b66350c61e4d12293267975c7ffa51ac1a993e8b06e2b8da" address="unix:///run/containerd/s/963070be97de703b0e3b5212391728daddacd1995df9e89de79fb7db0100d187" namespace=k8s.io protocol=ttrpc version=3 Dec 16 
03:27:45.709344 containerd[1596]: time="2025-12-16T03:27:45.709292451Z" level=info msg="connecting to shim 7d6d2e97480be4994fd292f41bac4a71d4965905864ad0c12d92d62e3a1ac946" address="unix:///run/containerd/s/7793454b6049d6a822f78a3969071247743505b7f6e6ced55d478255902b9185" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:27:45.714404 containerd[1596]: time="2025-12-16T03:27:45.714321498Z" level=info msg="connecting to shim 5ca3bf04b25409455c9183fdc5d32778fe7eb43d576fdec63bba1fc7faf30c80" address="unix:///run/containerd/s/699e8aa48f03f869233a0143403c2f019d1e8573ec6acf0d72213e017ad34ac2" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:27:45.818427 systemd[1]: Started cri-containerd-7d6d2e97480be4994fd292f41bac4a71d4965905864ad0c12d92d62e3a1ac946.scope - libcontainer container 7d6d2e97480be4994fd292f41bac4a71d4965905864ad0c12d92d62e3a1ac946. Dec 16 03:27:45.833247 systemd[1]: Started cri-containerd-3028fa42b964d477b66350c61e4d12293267975c7ffa51ac1a993e8b06e2b8da.scope - libcontainer container 3028fa42b964d477b66350c61e4d12293267975c7ffa51ac1a993e8b06e2b8da. Dec 16 03:27:45.836666 systemd[1]: Started cri-containerd-5ca3bf04b25409455c9183fdc5d32778fe7eb43d576fdec63bba1fc7faf30c80.scope - libcontainer container 5ca3bf04b25409455c9183fdc5d32778fe7eb43d576fdec63bba1fc7faf30c80. Dec 16 03:27:45.858000 audit: BPF prog-id=83 op=LOAD Dec 16 03:27:45.861000 audit: BPF prog-id=84 op=LOAD Dec 16 03:27:45.861000 audit[2494]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2459 pid=2494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:45.861000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563613362663034623235343039343535633931383366646335643332 Dec 16 03:27:45.861000 audit: BPF prog-id=84 op=UNLOAD Dec 16 03:27:45.861000 audit[2494]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2459 pid=2494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:45.861000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563613362663034623235343039343535633931383366646335643332 Dec 16 03:27:45.862000 audit: BPF prog-id=85 op=LOAD Dec 16 03:27:45.864000 audit: BPF prog-id=86 op=LOAD Dec 16 03:27:45.864000 audit[2492]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2455 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:45.864000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330323866613432623936346434373762363633353063363165346431 Dec 16 03:27:45.864000 audit: BPF prog-id=86 op=UNLOAD Dec 16 03:27:45.864000 audit[2492]: SYSCALL arch=c000003e 
syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2455 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:45.864000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330323866613432623936346434373762363633353063363165346431 Dec 16 03:27:45.866000 audit: BPF prog-id=87 op=LOAD Dec 16 03:27:45.866000 audit[2492]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2455 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:45.866000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330323866613432623936346434373762363633353063363165346431 Dec 16 03:27:45.867000 audit: BPF prog-id=88 op=LOAD Dec 16 03:27:45.867000 audit[2492]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2455 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:45.867000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330323866613432623936346434373762363633353063363165346431 Dec 16 03:27:45.867000 audit: BPF prog-id=88 op=UNLOAD Dec 16 03:27:45.867000 audit[2492]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2455 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:45.867000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330323866613432623936346434373762363633353063363165346431 Dec 16 03:27:45.867000 audit: BPF prog-id=87 op=UNLOAD Dec 16 03:27:45.867000 audit[2492]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2455 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:45.867000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330323866613432623936346434373762363633353063363165346431 Dec 16 03:27:45.867000 audit: BPF prog-id=89 op=LOAD Dec 16 03:27:45.867000 audit[2492]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2455 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:45.867000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330323866613432623936346434373762363633353063363165346431 Dec 16 03:27:45.868000 audit: BPF prog-id=90 op=LOAD Dec 16 03:27:45.868000 audit[2494]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2459 pid=2494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:45.868000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563613362663034623235343039343535633931383366646335643332 Dec 16 03:27:45.868000 audit: BPF prog-id=91 op=LOAD Dec 16 03:27:45.868000 audit[2494]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2459 pid=2494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:45.868000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563613362663034623235343039343535633931383366646335643332 Dec 16 03:27:45.869000 audit: BPF prog-id=91 op=UNLOAD Dec 16 03:27:45.869000 audit[2494]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2459 pid=2494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:45.869000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563613362663034623235343039343535633931383366646335643332 Dec 16 03:27:45.869000 audit: BPF prog-id=90 op=UNLOAD Dec 16 03:27:45.869000 audit[2494]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2459 pid=2494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:45.869000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563613362663034623235343039343535633931383366646335643332 Dec 16 03:27:45.869000 audit: BPF prog-id=92 op=LOAD Dec 16 03:27:45.869000 audit[2494]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2459 pid=2494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:45.869000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563613362663034623235343039343535633931383366646335643332 Dec 16 03:27:45.871000 audit: BPF prog-id=93 op=LOAD Dec 16 03:27:45.872000 audit: BPF prog-id=94 op=LOAD Dec 16 03:27:45.872000 audit[2486]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2461 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:45.872000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764366432653937343830626534393934666432393266343162616334 Dec 16 03:27:45.872000 audit: BPF prog-id=94 op=UNLOAD Dec 16 03:27:45.872000 audit[2486]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2461 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:45.872000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764366432653937343830626534393934666432393266343162616334 Dec 16 03:27:45.873000 audit: BPF prog-id=95 op=LOAD Dec 16 03:27:45.873000 audit[2486]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2461 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:45.873000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764366432653937343830626534393934666432393266343162616334 Dec 16 03:27:45.873000 audit: BPF prog-id=96 op=LOAD Dec 16 03:27:45.873000 audit[2486]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2461 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:45.873000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764366432653937343830626534393934666432393266343162616334 Dec 16 03:27:45.873000 audit: BPF prog-id=96 op=UNLOAD Dec 16 03:27:45.873000 audit[2486]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2461 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:45.873000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764366432653937343830626534393934666432393266343162616334 Dec 16 03:27:45.873000 audit: BPF prog-id=95 op=UNLOAD Dec 16 03:27:45.873000 audit[2486]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2461 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:45.873000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764366432653937343830626534393934666432393266343162616334 Dec 16 03:27:45.873000 audit: BPF prog-id=97 op=LOAD Dec 16 03:27:45.873000 audit[2486]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2461 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:45.873000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764366432653937343830626534393934666432393266343162616334 Dec 16 03:27:45.910405 kubelet[2395]: E1216 03:27:45.910348 2395 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://144.126.212.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 144.126.212.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 03:27:45.941116 containerd[1596]: time="2025-12-16T03:27:45.940757375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547.0.0-8-fbad3a37dc,Uid:c3f86be2489750620b5b8733a3b05375,Namespace:kube-system,Attempt:0,} returns sandbox id \"3028fa42b964d477b66350c61e4d12293267975c7ffa51ac1a993e8b06e2b8da\"" Dec 16 03:27:45.944629 kubelet[2395]: E1216 03:27:45.944583 2395 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:27:45.950861 containerd[1596]: time="2025-12-16T03:27:45.950793365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547.0.0-8-fbad3a37dc,Uid:f73ab487df9299f57c20ddb2184f443d,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ca3bf04b25409455c9183fdc5d32778fe7eb43d576fdec63bba1fc7faf30c80\"" Dec 16 03:27:45.952800 kubelet[2395]: E1216 03:27:45.952763 2395 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:27:45.958328 containerd[1596]: time="2025-12-16T03:27:45.958033350Z" level=info msg="CreateContainer within sandbox \"3028fa42b964d477b66350c61e4d12293267975c7ffa51ac1a993e8b06e2b8da\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 03:27:45.962718 containerd[1596]: time="2025-12-16T03:27:45.962682486Z" level=info 
msg="CreateContainer within sandbox \"5ca3bf04b25409455c9183fdc5d32778fe7eb43d576fdec63bba1fc7faf30c80\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 03:27:45.972915 kubelet[2395]: E1216 03:27:45.972870 2395 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://144.126.212.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547.0.0-8-fbad3a37dc&limit=500&resourceVersion=0\": dial tcp 144.126.212.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 03:27:45.979517 containerd[1596]: time="2025-12-16T03:27:45.979473274Z" level=info msg="Container 99ce3eada401b0800ab339059100621a962ff2a8c8761269b535bff2276f76b1: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:27:45.981332 containerd[1596]: time="2025-12-16T03:27:45.981285152Z" level=info msg="Container 52824127be56b3d089d859ca2b221c8056d7e8e8f81f69e10a5801562e8f1b19: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:27:45.991724 containerd[1596]: time="2025-12-16T03:27:45.991677459Z" level=info msg="CreateContainer within sandbox \"5ca3bf04b25409455c9183fdc5d32778fe7eb43d576fdec63bba1fc7faf30c80\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"52824127be56b3d089d859ca2b221c8056d7e8e8f81f69e10a5801562e8f1b19\"" Dec 16 03:27:45.992606 containerd[1596]: time="2025-12-16T03:27:45.992547057Z" level=info msg="StartContainer for \"52824127be56b3d089d859ca2b221c8056d7e8e8f81f69e10a5801562e8f1b19\"" Dec 16 03:27:45.993119 containerd[1596]: time="2025-12-16T03:27:45.993068178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547.0.0-8-fbad3a37dc,Uid:5c8d4ae905b4741b53cdeb83e44e4017,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d6d2e97480be4994fd292f41bac4a71d4965905864ad0c12d92d62e3a1ac946\"" Dec 16 03:27:45.994920 containerd[1596]: time="2025-12-16T03:27:45.994024115Z" level=info msg="CreateContainer within sandbox \"3028fa42b964d477b66350c61e4d12293267975c7ffa51ac1a993e8b06e2b8da\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"99ce3eada401b0800ab339059100621a962ff2a8c8761269b535bff2276f76b1\"" Dec 16 03:27:45.994920 containerd[1596]: time="2025-12-16T03:27:45.994650390Z" level=info msg="StartContainer for \"99ce3eada401b0800ab339059100621a962ff2a8c8761269b535bff2276f76b1\"" Dec 16 03:27:45.994920 containerd[1596]: time="2025-12-16T03:27:45.994844669Z" level=info msg="connecting to shim 52824127be56b3d089d859ca2b221c8056d7e8e8f81f69e10a5801562e8f1b19" address="unix:///run/containerd/s/699e8aa48f03f869233a0143403c2f019d1e8573ec6acf0d72213e017ad34ac2" protocol=ttrpc version=3 Dec 16 03:27:45.995824 kubelet[2395]: E1216 03:27:45.995771 2395 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:27:45.998389 containerd[1596]: time="2025-12-16T03:27:45.998132192Z" level=info msg="connecting to shim 99ce3eada401b0800ab339059100621a962ff2a8c8761269b535bff2276f76b1" address="unix:///run/containerd/s/963070be97de703b0e3b5212391728daddacd1995df9e89de79fb7db0100d187" protocol=ttrpc version=3 Dec 16 03:27:45.999593 containerd[1596]: time="2025-12-16T03:27:45.999409662Z" level=info msg="CreateContainer within sandbox \"7d6d2e97480be4994fd292f41bac4a71d4965905864ad0c12d92d62e3a1ac946\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 
03:27:46.002425 kubelet[2395]: E1216 03:27:46.002387 2395 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://144.126.212.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.0.0-8-fbad3a37dc?timeout=10s\": dial tcp 144.126.212.19:6443: connect: connection refused" interval="1.6s" Dec 16 03:27:46.008359 containerd[1596]: time="2025-12-16T03:27:46.008289768Z" level=info msg="Container 9225c9a3e360764681a17b59fdff70eb9989c170514232a442b06857ddd02498: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:27:46.017505 containerd[1596]: time="2025-12-16T03:27:46.017380839Z" level=info msg="CreateContainer within sandbox \"7d6d2e97480be4994fd292f41bac4a71d4965905864ad0c12d92d62e3a1ac946\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9225c9a3e360764681a17b59fdff70eb9989c170514232a442b06857ddd02498\"" Dec 16 03:27:46.019622 containerd[1596]: time="2025-12-16T03:27:46.019436762Z" level=info msg="StartContainer for \"9225c9a3e360764681a17b59fdff70eb9989c170514232a442b06857ddd02498\"" Dec 16 03:27:46.027501 containerd[1596]: time="2025-12-16T03:27:46.027406398Z" level=info msg="connecting to shim 9225c9a3e360764681a17b59fdff70eb9989c170514232a442b06857ddd02498" address="unix:///run/containerd/s/7793454b6049d6a822f78a3969071247743505b7f6e6ced55d478255902b9185" protocol=ttrpc version=3 Dec 16 03:27:46.039449 systemd[1]: Started cri-containerd-99ce3eada401b0800ab339059100621a962ff2a8c8761269b535bff2276f76b1.scope - libcontainer container 99ce3eada401b0800ab339059100621a962ff2a8c8761269b535bff2276f76b1. Dec 16 03:27:46.054482 systemd[1]: Started cri-containerd-52824127be56b3d089d859ca2b221c8056d7e8e8f81f69e10a5801562e8f1b19.scope - libcontainer container 52824127be56b3d089d859ca2b221c8056d7e8e8f81f69e10a5801562e8f1b19. Dec 16 03:27:46.079469 systemd[1]: Started cri-containerd-9225c9a3e360764681a17b59fdff70eb9989c170514232a442b06857ddd02498.scope - libcontainer container 9225c9a3e360764681a17b59fdff70eb9989c170514232a442b06857ddd02498. 
Dec 16 03:27:46.081000 audit: BPF prog-id=98 op=LOAD Dec 16 03:27:46.083000 audit: BPF prog-id=99 op=LOAD Dec 16 03:27:46.083000 audit[2580]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=2455 pid=2580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:46.083000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939636533656164613430316230383030616233333930353931303036 Dec 16 03:27:46.083000 audit: BPF prog-id=99 op=UNLOAD Dec 16 03:27:46.083000 audit[2580]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2455 pid=2580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:46.083000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939636533656164613430316230383030616233333930353931303036 Dec 16 03:27:46.083000 audit: BPF prog-id=100 op=LOAD Dec 16 03:27:46.083000 audit[2580]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=2455 pid=2580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:46.083000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939636533656164613430316230383030616233333930353931303036 Dec 16 03:27:46.084000 audit: BPF prog-id=101 op=LOAD Dec 16 03:27:46.084000 audit[2580]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=2455 pid=2580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:46.084000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939636533656164613430316230383030616233333930353931303036 Dec 16 03:27:46.085000 audit: BPF prog-id=101 op=UNLOAD Dec 16 03:27:46.085000 audit[2580]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2455 pid=2580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:46.085000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939636533656164613430316230383030616233333930353931303036 Dec 16 03:27:46.085000 audit: BPF prog-id=100 op=UNLOAD Dec 16 03:27:46.085000 audit[2580]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2455 pid=2580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:46.085000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939636533656164613430316230383030616233333930353931303036 Dec 16 03:27:46.085000 audit: BPF prog-id=102 op=LOAD Dec 16 03:27:46.085000 audit[2580]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002206e8 a2=98 a3=0 items=0 ppid=2455 pid=2580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:46.085000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939636533656164613430316230383030616233333930353931303036 Dec 16 03:27:46.106000 audit: BPF prog-id=103 op=LOAD Dec 16 03:27:46.107000 audit: BPF prog-id=104 op=LOAD Dec 16 03:27:46.107000 audit[2579]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2459 pid=2579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:46.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532383234313237626535366233643038396438353963613262323231 Dec 16 03:27:46.109000 audit: BPF prog-id=104 op=UNLOAD Dec 16 03:27:46.109000 audit[2579]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2459 pid=2579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:46.109000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532383234313237626535366233643038396438353963613262323231 Dec 16 03:27:46.109000 audit: BPF prog-id=105 op=LOAD Dec 16 03:27:46.109000 audit[2579]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2459 pid=2579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:46.109000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532383234313237626535366233643038396438353963613262323231 Dec 16 03:27:46.109000 audit: BPF prog-id=106 op=LOAD Dec 16 03:27:46.109000 audit[2579]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2459 pid=2579 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:46.109000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532383234313237626535366233643038396438353963613262323231 Dec 16 03:27:46.109000 audit: BPF prog-id=106 op=UNLOAD Dec 16 03:27:46.109000 audit[2579]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2459 pid=2579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:46.109000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532383234313237626535366233643038396438353963613262323231 Dec 16 03:27:46.109000 audit: BPF prog-id=105 op=UNLOAD Dec 16 03:27:46.109000 audit[2579]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2459 pid=2579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:46.109000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532383234313237626535366233643038396438353963613262323231 Dec 16 03:27:46.109000 audit: BPF prog-id=107 op=LOAD Dec 16 03:27:46.109000 audit[2579]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2459 pid=2579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:46.109000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532383234313237626535366233643038396438353963613262323231 Dec 16 03:27:46.130000 audit: BPF prog-id=108 op=LOAD Dec 16 03:27:46.132000 audit: BPF prog-id=109 op=LOAD Dec 16 03:27:46.132000 audit[2607]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2461 pid=2607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:46.132000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932323563396133653336303736343638316131376235396664666637 Dec 16 03:27:46.133000 audit: BPF prog-id=109 op=UNLOAD Dec 16 03:27:46.133000 audit[2607]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2461 pid=2607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:46.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932323563396133653336303736343638316131376235396664666637 Dec 16 03:27:46.135000 audit: BPF prog-id=110 op=LOAD Dec 16 03:27:46.135000 audit[2607]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2461 pid=2607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:46.135000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932323563396133653336303736343638316131376235396664666637 Dec 16 03:27:46.135000 audit: BPF prog-id=111 op=LOAD Dec 16 03:27:46.135000 audit[2607]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2461 pid=2607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:46.135000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932323563396133653336303736343638316131376235396664666637 Dec 16 03:27:46.135000 audit: BPF prog-id=111 op=UNLOAD Dec 16 03:27:46.135000 audit[2607]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2461 pid=2607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:46.135000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932323563396133653336303736343638316131376235396664666637 Dec 16 03:27:46.135000 audit: BPF prog-id=110 op=UNLOAD Dec 16 03:27:46.135000 audit[2607]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2461 pid=2607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:46.135000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932323563396133653336303736343638316131376235396664666637 Dec 16 03:27:46.136000 audit: BPF prog-id=112 op=LOAD Dec 16 03:27:46.136000 audit[2607]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2461 pid=2607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:46.136000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932323563396133653336303736343638316131376235396664666637 Dec 16 03:27:46.156766 containerd[1596]: time="2025-12-16T03:27:46.156573762Z" level=info msg="StartContainer for \"99ce3eada401b0800ab339059100621a962ff2a8c8761269b535bff2276f76b1\" returns successfully" Dec 16 03:27:46.194030 kubelet[2395]: I1216 03:27:46.193992 2395 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:46.195705 kubelet[2395]: E1216 03:27:46.195645 2395 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://144.126.212.19:6443/api/v1/nodes\": dial tcp 144.126.212.19:6443: connect: connection refused" node="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:46.209635 containerd[1596]: time="2025-12-16T03:27:46.209439767Z" level=info msg="StartContainer for \"52824127be56b3d089d859ca2b221c8056d7e8e8f81f69e10a5801562e8f1b19\" returns successfully" Dec 16 03:27:46.238287 containerd[1596]: time="2025-12-16T03:27:46.238229616Z" level=info msg="StartContainer for \"9225c9a3e360764681a17b59fdff70eb9989c170514232a442b06857ddd02498\" returns successfully" Dec 16 03:27:46.639587 kubelet[2395]: E1216 03:27:46.639551 2395 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-8-fbad3a37dc\" not found" node="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:46.641352 kubelet[2395]: E1216 03:27:46.639743 2395 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:27:46.645057 kubelet[2395]: E1216 03:27:46.644834 2395 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-8-fbad3a37dc\" not found" node="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:46.645057 kubelet[2395]: E1216 03:27:46.645003 2395 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:27:46.647191 kubelet[2395]: E1216 03:27:46.647167 2395 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-8-fbad3a37dc\" not found" node="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:46.648113 kubelet[2395]: E1216 03:27:46.647475 2395 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:27:47.651328 kubelet[2395]: E1216 03:27:47.650755 2395 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-8-fbad3a37dc\" not found" node="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:47.651328 kubelet[2395]: E1216 03:27:47.650907 2395 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:27:47.651328 kubelet[2395]: E1216 03:27:47.651116 2395 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-8-fbad3a37dc\" not found" node="ci-4547.0.0-8-fbad3a37dc" Dec 16 
03:27:47.651328 kubelet[2395]: E1216 03:27:47.651257 2395 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:27:47.798113 kubelet[2395]: I1216 03:27:47.797784 2395 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:48.740977 kubelet[2395]: E1216 03:27:48.740642 2395 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-8-fbad3a37dc\" not found" node="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:48.740977 kubelet[2395]: E1216 03:27:48.740807 2395 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:27:49.245181 kubelet[2395]: E1216 03:27:49.245110 2395 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4547.0.0-8-fbad3a37dc\" not found" node="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:49.274066 kubelet[2395]: I1216 03:27:49.273997 2395 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:49.289743 kubelet[2395]: I1216 03:27:49.289696 2395 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:49.305434 kubelet[2395]: E1216 03:27:49.305361 2395 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4547.0.0-8-fbad3a37dc\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:49.305434 kubelet[2395]: I1216 03:27:49.305405 2395 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:49.310133 kubelet[2395]: E1216 03:27:49.310075 2395 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547.0.0-8-fbad3a37dc\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:49.310133 kubelet[2395]: I1216 03:27:49.310120 2395 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:49.317155 kubelet[2395]: E1216 03:27:49.315604 2395 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547.0.0-8-fbad3a37dc\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:49.546374 kubelet[2395]: I1216 03:27:49.546307 2395 apiserver.go:52] "Watching apiserver" Dec 16 03:27:49.595736 kubelet[2395]: I1216 03:27:49.595618 2395 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 03:27:51.383663 systemd[1]: Reload requested from client PID 2685 ('systemctl') (unit session-8.scope)... Dec 16 03:27:51.383687 systemd[1]: Reloading... Dec 16 03:27:51.538129 zram_generator::config[2734]: No configuration found. 
Dec 16 03:27:51.814443 kubelet[2395]: I1216 03:27:51.814389 2395 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:51.822182 kubelet[2395]: I1216 03:27:51.822099 2395 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 03:27:51.822710 kubelet[2395]: E1216 03:27:51.822626 2395 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:27:51.855072 systemd[1]: Reloading finished in 470 ms. Dec 16 03:27:51.860965 kubelet[2395]: I1216 03:27:51.860907 2395 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:51.873770 kubelet[2395]: I1216 03:27:51.873584 2395 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 03:27:51.874443 kubelet[2395]: E1216 03:27:51.874411 2395 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:27:51.894626 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:27:51.911013 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 03:27:51.913145 kernel: kauditd_printk_skb: 202 callbacks suppressed Dec 16 03:27:51.913292 kernel: audit: type=1131 audit(1765855671.910:396): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:51.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:51.911672 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:27:51.915588 systemd[1]: kubelet.service: Consumed 1.657s CPU time, 121.3M memory peak. Dec 16 03:27:51.919471 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 16 03:27:51.923735 kernel: audit: type=1334 audit(1765855671.919:397): prog-id=113 op=LOAD Dec 16 03:27:51.923883 kernel: audit: type=1334 audit(1765855671.919:398): prog-id=72 op=UNLOAD Dec 16 03:27:51.919000 audit: BPF prog-id=113 op=LOAD Dec 16 03:27:51.919000 audit: BPF prog-id=72 op=UNLOAD Dec 16 03:27:51.919000 audit: BPF prog-id=114 op=LOAD Dec 16 03:27:51.930699 kernel: audit: type=1334 audit(1765855671.919:399): prog-id=114 op=LOAD Dec 16 03:27:51.930902 kernel: audit: type=1334 audit(1765855671.919:400): prog-id=115 op=LOAD Dec 16 03:27:51.930937 kernel: audit: type=1334 audit(1765855671.919:401): prog-id=73 op=UNLOAD Dec 16 03:27:51.919000 audit: BPF prog-id=115 op=LOAD Dec 16 03:27:51.919000 audit: BPF prog-id=73 op=UNLOAD Dec 16 03:27:51.933143 kernel: audit: type=1334 audit(1765855671.919:402): prog-id=74 op=UNLOAD Dec 16 03:27:51.919000 audit: BPF prog-id=74 op=UNLOAD Dec 16 03:27:51.920000 audit: BPF prog-id=116 op=LOAD Dec 16 03:27:51.935165 kernel: audit: type=1334 audit(1765855671.920:403): prog-id=116 op=LOAD Dec 16 03:27:51.920000 audit: BPF prog-id=76 op=UNLOAD Dec 16 03:27:51.937138 kernel: audit: type=1334 audit(1765855671.920:404): prog-id=76 op=UNLOAD Dec 16 03:27:51.921000 audit: BPF prog-id=117 op=LOAD Dec 16 03:27:51.939105 kernel: audit: type=1334 audit(1765855671.921:405): prog-id=117 op=LOAD Dec 16 03:27:51.921000 audit: BPF prog-id=80 op=UNLOAD Dec 16 03:27:51.921000 audit: BPF prog-id=118 op=LOAD Dec 16 03:27:51.921000 audit: BPF prog-id=119 op=LOAD Dec 16 03:27:51.921000 audit: BPF prog-id=81 op=UNLOAD Dec 16 03:27:51.921000 audit: BPF prog-id=82 op=UNLOAD Dec 16 03:27:51.922000 audit: BPF prog-id=120 op=LOAD Dec 16 03:27:51.922000 audit: BPF prog-id=69 op=UNLOAD Dec 16 03:27:51.924000 audit: BPF prog-id=121 op=LOAD Dec 16 03:27:51.924000 audit: BPF prog-id=77 op=UNLOAD Dec 16 03:27:51.924000 audit: BPF prog-id=122 op=LOAD Dec 16 03:27:51.924000 audit: BPF prog-id=123 op=LOAD Dec 16 03:27:51.924000 audit: BPF prog-id=78 op=UNLOAD Dec 16 03:27:51.924000 audit: BPF prog-id=79 op=UNLOAD Dec 16 03:27:51.927000 audit: BPF prog-id=124 op=LOAD Dec 16 03:27:51.927000 audit: BPF prog-id=63 op=UNLOAD Dec 16 03:27:51.928000 audit: BPF prog-id=125 op=LOAD Dec 16 03:27:51.928000 audit: BPF prog-id=126 op=LOAD Dec 16 03:27:51.928000 audit: BPF prog-id=64 op=UNLOAD Dec 16 03:27:51.928000 audit: BPF prog-id=65 op=UNLOAD Dec 16 03:27:51.928000 audit: BPF prog-id=127 op=LOAD Dec 16 03:27:51.928000 audit: BPF prog-id=66 op=UNLOAD Dec 16 03:27:51.928000 audit: BPF prog-id=128 op=LOAD Dec 16 03:27:51.928000 audit: BPF prog-id=129 op=LOAD Dec 16 03:27:51.928000 audit: BPF prog-id=67 op=UNLOAD Dec 16 03:27:51.928000 audit: BPF prog-id=68 op=UNLOAD Dec 16 03:27:51.930000 audit: BPF prog-id=130 op=LOAD Dec 16 03:27:51.930000 audit: BPF prog-id=131 op=LOAD Dec 16 03:27:51.930000 audit: BPF prog-id=70 op=UNLOAD Dec 16 03:27:51.930000 audit: BPF prog-id=71 op=UNLOAD Dec 16 03:27:51.931000 audit: BPF prog-id=132 op=LOAD Dec 16 03:27:51.931000 audit: BPF prog-id=75 op=UNLOAD Dec 16 03:27:52.111042 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:27:52.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:27:52.123641 (kubelet)[2782]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 03:27:52.207610 kubelet[2782]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 03:27:52.207610 kubelet[2782]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 03:27:52.209474 kubelet[2782]: I1216 03:27:52.207816 2782 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 03:27:52.218473 kubelet[2782]: I1216 03:27:52.218430 2782 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 16 03:27:52.218659 kubelet[2782]: I1216 03:27:52.218650 2782 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 03:27:52.220765 kubelet[2782]: I1216 03:27:52.220724 2782 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 16 03:27:52.221623 kubelet[2782]: I1216 03:27:52.220937 2782 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 03:27:52.221623 kubelet[2782]: I1216 03:27:52.221375 2782 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 03:27:52.230360 kubelet[2782]: I1216 03:27:52.230317 2782 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 03:27:52.236573 kubelet[2782]: I1216 03:27:52.236529 2782 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 03:27:52.246047 kubelet[2782]: I1216 03:27:52.246014 2782 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 03:27:52.250622 kubelet[2782]: I1216 03:27:52.250530 2782 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 16 03:27:52.254629 kubelet[2782]: I1216 03:27:52.253706 2782 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 03:27:52.254629 kubelet[2782]: I1216 03:27:52.253809 2782 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547.0.0-8-fbad3a37dc","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 03:27:52.254629 kubelet[2782]: I1216 03:27:52.254073 2782 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 03:27:52.254629 kubelet[2782]: I1216 03:27:52.254129 2782 container_manager_linux.go:306] "Creating device plugin manager" Dec 16 03:27:52.255171 kubelet[2782]: I1216 03:27:52.254206 2782 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 16 03:27:52.257973 kubelet[2782]: I1216 03:27:52.257930 2782 state_mem.go:36] "Initialized new in-memory state store" Dec 16 03:27:52.263043 kubelet[2782]: I1216 03:27:52.262987 2782 kubelet.go:475] "Attempting to sync node with API server" Dec 16 03:27:52.263887 kubelet[2782]: I1216 03:27:52.263827 2782 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 03:27:52.266020 kubelet[2782]: I1216 03:27:52.265988 2782 kubelet.go:387] "Adding apiserver pod source" Dec 16 03:27:52.267698 kubelet[2782]: I1216 03:27:52.267449 2782 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 03:27:52.275555 kubelet[2782]: I1216 03:27:52.275501 2782 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 03:27:52.282491 kubelet[2782]: I1216 03:27:52.282366 2782 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 03:27:52.282757 kubelet[2782]: I1216 03:27:52.282723 2782 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 
03:27:52.297444 kubelet[2782]: I1216 03:27:52.297412 2782 server.go:1262] "Started kubelet" Dec 16 03:27:52.298673 kubelet[2782]: I1216 03:27:52.297833 2782 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 03:27:52.298932 kubelet[2782]: I1216 03:27:52.298910 2782 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 16 03:27:52.299126 kubelet[2782]: I1216 03:27:52.298477 2782 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 03:27:52.304171 kubelet[2782]: I1216 03:27:52.304140 2782 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 03:27:52.313826 kubelet[2782]: I1216 03:27:52.304741 2782 server.go:310] "Adding debug handlers to kubelet server" Dec 16 03:27:52.325415 kubelet[2782]: I1216 03:27:52.325379 2782 factory.go:223] Registration of the systemd container factory successfully Dec 16 03:27:52.325734 kubelet[2782]: I1216 03:27:52.325712 2782 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 03:27:52.327679 kubelet[2782]: I1216 03:27:52.305820 2782 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 03:27:52.328207 kubelet[2782]: I1216 03:27:52.315214 2782 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 03:27:52.330155 kubelet[2782]: I1216 03:27:52.306045 2782 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 03:27:52.330155 kubelet[2782]: I1216 03:27:52.315198 2782 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 16 03:27:52.333130 kubelet[2782]: I1216 03:27:52.333066 2782 reconciler.go:29] "Reconciler: start to sync state" Dec 16 03:27:52.342074 kubelet[2782]: I1216 03:27:52.342039 2782 factory.go:223] Registration of the containerd container factory successfully Dec 16 03:27:52.348397 kubelet[2782]: E1216 03:27:52.348354 2782 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 03:27:52.391133 kubelet[2782]: I1216 03:27:52.389965 2782 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 16 03:27:52.398632 kubelet[2782]: I1216 03:27:52.398591 2782 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Dec 16 03:27:52.399487 kubelet[2782]: I1216 03:27:52.399459 2782 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 03:27:52.399746 kubelet[2782]: I1216 03:27:52.399733 2782 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 03:27:52.403200 kubelet[2782]: E1216 03:27:52.399912 2782 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 03:27:52.488784 kubelet[2782]: I1216 03:27:52.488744 2782 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 03:27:52.488784 kubelet[2782]: I1216 03:27:52.488770 2782 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 03:27:52.488784 kubelet[2782]: I1216 03:27:52.488800 2782 state_mem.go:36] "Initialized new in-memory state store" Dec 16 03:27:52.489011 kubelet[2782]: I1216 03:27:52.488990 2782 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 03:27:52.489042 kubelet[2782]: I1216 03:27:52.489010 2782 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 03:27:52.489042 kubelet[2782]: I1216 03:27:52.489036 2782 policy_none.go:49] "None policy: Start" Dec 16 03:27:52.489100 kubelet[2782]: I1216 03:27:52.489050 2782 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 03:27:52.489100 kubelet[2782]: I1216 03:27:52.489064 2782 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 16 03:27:52.489256 kubelet[2782]: I1216 03:27:52.489226 2782 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Dec 16 03:27:52.489256 kubelet[2782]: I1216 03:27:52.489243 2782 policy_none.go:47] "Start" Dec 16 03:27:52.501754 kubelet[2782]: E1216 03:27:52.500665 2782 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 03:27:52.501754 kubelet[2782]: I1216 03:27:52.500940 2782 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 03:27:52.501754 kubelet[2782]: I1216 03:27:52.500956 2782 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 03:27:52.508291 kubelet[2782]: I1216 03:27:52.507362 2782 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 03:27:52.510067 kubelet[2782]: E1216 03:27:52.509505 2782 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 03:27:52.515962 kubelet[2782]: I1216 03:27:52.515872 2782 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:52.534489 kubelet[2782]: I1216 03:27:52.534418 2782 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:52.538323 kubelet[2782]: I1216 03:27:52.537930 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c3f86be2489750620b5b8733a3b05375-ca-certs\") pod \"kube-apiserver-ci-4547.0.0-8-fbad3a37dc\" (UID: \"c3f86be2489750620b5b8733a3b05375\") " pod="kube-system/kube-apiserver-ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:52.538323 kubelet[2782]: I1216 03:27:52.538003 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c3f86be2489750620b5b8733a3b05375-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547.0.0-8-fbad3a37dc\" (UID: \"c3f86be2489750620b5b8733a3b05375\") " pod="kube-system/kube-apiserver-ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:52.538323 kubelet[2782]: I1216 03:27:52.538037 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f73ab487df9299f57c20ddb2184f443d-ca-certs\") pod \"kube-controller-manager-ci-4547.0.0-8-fbad3a37dc\" (UID: \"f73ab487df9299f57c20ddb2184f443d\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:52.538323 kubelet[2782]: I1216 03:27:52.538078 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f73ab487df9299f57c20ddb2184f443d-flexvolume-dir\") pod \"kube-controller-manager-ci-4547.0.0-8-fbad3a37dc\" (UID: \"f73ab487df9299f57c20ddb2184f443d\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:52.538323 kubelet[2782]: I1216 03:27:52.538120 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f73ab487df9299f57c20ddb2184f443d-kubeconfig\") pod \"kube-controller-manager-ci-4547.0.0-8-fbad3a37dc\" (UID: \"f73ab487df9299f57c20ddb2184f443d\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:52.538605 kubelet[2782]: I1216 03:27:52.538158 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f73ab487df9299f57c20ddb2184f443d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547.0.0-8-fbad3a37dc\" (UID: \"f73ab487df9299f57c20ddb2184f443d\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:52.538605 kubelet[2782]: I1216 03:27:52.538210 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5c8d4ae905b4741b53cdeb83e44e4017-kubeconfig\") pod \"kube-scheduler-ci-4547.0.0-8-fbad3a37dc\" (UID: \"5c8d4ae905b4741b53cdeb83e44e4017\") " pod="kube-system/kube-scheduler-ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:52.538605 kubelet[2782]: I1216 03:27:52.538241 2782 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c3f86be2489750620b5b8733a3b05375-k8s-certs\") pod \"kube-apiserver-ci-4547.0.0-8-fbad3a37dc\" (UID: \"c3f86be2489750620b5b8733a3b05375\") " pod="kube-system/kube-apiserver-ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:52.538605 kubelet[2782]: I1216 03:27:52.538282 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f73ab487df9299f57c20ddb2184f443d-k8s-certs\") pod \"kube-controller-manager-ci-4547.0.0-8-fbad3a37dc\" (UID: \"f73ab487df9299f57c20ddb2184f443d\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:52.550743 kubelet[2782]: I1216 03:27:52.550701 2782 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:52.572979 kubelet[2782]: I1216 03:27:52.572942 2782 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 03:27:52.575214 kubelet[2782]: E1216 03:27:52.574710 2782 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547.0.0-8-fbad3a37dc\" already exists" pod="kube-system/kube-scheduler-ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:52.582382 kubelet[2782]: I1216 03:27:52.581839 2782 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 03:27:52.582382 kubelet[2782]: E1216 03:27:52.581937 2782 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547.0.0-8-fbad3a37dc\" already exists" pod="kube-system/kube-apiserver-ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:52.584125 kubelet[2782]: I1216 03:27:52.584063 2782 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 03:27:52.642366 kubelet[2782]: I1216 03:27:52.640039 2782 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:52.654289 kubelet[2782]: I1216 03:27:52.654039 2782 kubelet_node_status.go:124] "Node was previously registered" node="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:52.654289 kubelet[2782]: I1216 03:27:52.654259 2782 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:27:52.876960 kubelet[2782]: E1216 03:27:52.876667 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:27:52.882924 kubelet[2782]: E1216 03:27:52.882861 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:27:52.884773 kubelet[2782]: E1216 03:27:52.884580 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:27:53.274211 kubelet[2782]: I1216 03:27:53.274143 2782 apiserver.go:52] "Watching apiserver" Dec 16 03:27:53.329363 kubelet[2782]: I1216 03:27:53.329310 2782 desired_state_of_world_populator.go:154] "Finished 
populating initial desired state of world" Dec 16 03:27:53.475398 kubelet[2782]: E1216 03:27:53.474715 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:27:53.475398 kubelet[2782]: E1216 03:27:53.474715 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:27:53.475398 kubelet[2782]: E1216 03:27:53.475021 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:27:53.498083 kubelet[2782]: I1216 03:27:53.497990 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4547.0.0-8-fbad3a37dc" podStartSLOduration=2.497965364 podStartE2EDuration="2.497965364s" podCreationTimestamp="2025-12-16 03:27:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:27:53.496691546 +0000 UTC m=+1.363903913" watchObservedRunningTime="2025-12-16 03:27:53.497965364 +0000 UTC m=+1.365177719" Dec 16 03:27:53.498827 kubelet[2782]: I1216 03:27:53.498364 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4547.0.0-8-fbad3a37dc" podStartSLOduration=2.498165649 podStartE2EDuration="2.498165649s" podCreationTimestamp="2025-12-16 03:27:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:27:53.484288891 +0000 UTC m=+1.351501254" watchObservedRunningTime="2025-12-16 03:27:53.498165649 +0000 UTC m=+1.365377989" Dec 16 03:27:53.525712 kubelet[2782]: I1216 03:27:53.525425 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4547.0.0-8-fbad3a37dc" podStartSLOduration=1.5254060539999998 podStartE2EDuration="1.525406054s" podCreationTimestamp="2025-12-16 03:27:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:27:53.509417955 +0000 UTC m=+1.376630317" watchObservedRunningTime="2025-12-16 03:27:53.525406054 +0000 UTC m=+1.392618415" Dec 16 03:27:54.477555 kubelet[2782]: E1216 03:27:54.476726 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:27:54.477555 kubelet[2782]: E1216 03:27:54.477453 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:27:55.478877 kubelet[2782]: E1216 03:27:55.478836 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:27:56.027644 kubelet[2782]: E1216 03:27:56.027566 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:27:56.481276 
kubelet[2782]: E1216 03:27:56.481216 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:27:57.210037 kubelet[2782]: I1216 03:27:57.209695 2782 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 03:27:57.210562 containerd[1596]: time="2025-12-16T03:27:57.210452151Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 03:27:57.212014 kubelet[2782]: I1216 03:27:57.211988 2782 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 03:27:57.957906 systemd[1]: Created slice kubepods-besteffort-pod2fcf1552_7fbd_4850_948a_cd65ee7ebb59.slice - libcontainer container kubepods-besteffort-pod2fcf1552_7fbd_4850_948a_cd65ee7ebb59.slice. Dec 16 03:27:57.984506 kubelet[2782]: I1216 03:27:57.984343 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzpzj\" (UniqueName: \"kubernetes.io/projected/2fcf1552-7fbd-4850-948a-cd65ee7ebb59-kube-api-access-rzpzj\") pod \"kube-proxy-tvsnq\" (UID: \"2fcf1552-7fbd-4850-948a-cd65ee7ebb59\") " pod="kube-system/kube-proxy-tvsnq" Dec 16 03:27:57.984506 kubelet[2782]: I1216 03:27:57.984387 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2fcf1552-7fbd-4850-948a-cd65ee7ebb59-kube-proxy\") pod \"kube-proxy-tvsnq\" (UID: \"2fcf1552-7fbd-4850-948a-cd65ee7ebb59\") " pod="kube-system/kube-proxy-tvsnq" Dec 16 03:27:57.984506 kubelet[2782]: I1216 03:27:57.984409 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2fcf1552-7fbd-4850-948a-cd65ee7ebb59-xtables-lock\") pod \"kube-proxy-tvsnq\" (UID: \"2fcf1552-7fbd-4850-948a-cd65ee7ebb59\") " pod="kube-system/kube-proxy-tvsnq" Dec 16 03:27:57.984506 kubelet[2782]: I1216 03:27:57.984424 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2fcf1552-7fbd-4850-948a-cd65ee7ebb59-lib-modules\") pod \"kube-proxy-tvsnq\" (UID: \"2fcf1552-7fbd-4850-948a-cd65ee7ebb59\") " pod="kube-system/kube-proxy-tvsnq" Dec 16 03:27:58.096290 kubelet[2782]: E1216 03:27:58.096226 2782 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 16 03:27:58.096290 kubelet[2782]: E1216 03:27:58.096293 2782 projected.go:196] Error preparing data for projected volume kube-api-access-rzpzj for pod kube-system/kube-proxy-tvsnq: configmap "kube-root-ca.crt" not found Dec 16 03:27:58.096505 kubelet[2782]: E1216 03:27:58.096378 2782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2fcf1552-7fbd-4850-948a-cd65ee7ebb59-kube-api-access-rzpzj podName:2fcf1552-7fbd-4850-948a-cd65ee7ebb59 nodeName:}" failed. No retries permitted until 2025-12-16 03:27:58.596344821 +0000 UTC m=+6.463557162 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rzpzj" (UniqueName: "kubernetes.io/projected/2fcf1552-7fbd-4850-948a-cd65ee7ebb59-kube-api-access-rzpzj") pod "kube-proxy-tvsnq" (UID: "2fcf1552-7fbd-4850-948a-cd65ee7ebb59") : configmap "kube-root-ca.crt" not found Dec 16 03:27:58.487159 systemd[1]: Created slice kubepods-besteffort-pod61713379_9c4e_42e7_83d5_586008d2f155.slice - libcontainer container kubepods-besteffort-pod61713379_9c4e_42e7_83d5_586008d2f155.slice. Dec 16 03:27:58.588378 kubelet[2782]: I1216 03:27:58.588306 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmn82\" (UniqueName: \"kubernetes.io/projected/61713379-9c4e-42e7-83d5-586008d2f155-kube-api-access-kmn82\") pod \"tigera-operator-65cdcdfd6d-jbsf6\" (UID: \"61713379-9c4e-42e7-83d5-586008d2f155\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-jbsf6" Dec 16 03:27:58.588577 kubelet[2782]: I1216 03:27:58.588406 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/61713379-9c4e-42e7-83d5-586008d2f155-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-jbsf6\" (UID: \"61713379-9c4e-42e7-83d5-586008d2f155\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-jbsf6" Dec 16 03:27:58.796452 containerd[1596]: time="2025-12-16T03:27:58.796331291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-jbsf6,Uid:61713379-9c4e-42e7-83d5-586008d2f155,Namespace:tigera-operator,Attempt:0,}" Dec 16 03:27:58.820121 containerd[1596]: time="2025-12-16T03:27:58.819833178Z" level=info msg="connecting to shim ae7bb018ffd3cfac35397c11fcf14bc0b52fc48a841cc23faeb1586b01e45e98" address="unix:///run/containerd/s/e2598c305db1203b65fbbd8cb1968537a8aa8a985d918225b7a05a75c2499a3e" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:27:58.853517 systemd[1]: Started cri-containerd-ae7bb018ffd3cfac35397c11fcf14bc0b52fc48a841cc23faeb1586b01e45e98.scope - libcontainer container ae7bb018ffd3cfac35397c11fcf14bc0b52fc48a841cc23faeb1586b01e45e98. 
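The MountVolume.SetUp failure above is transient: the projected kube-api-access-rzpzj volume cannot be assembled until the kube-root-ca.crt ConfigMap exists in the namespace, so nestedpendingoperations defers the next attempt instead of retrying immediately ("No retries permitted until ... durationBeforeRetry 500ms"). The sketch below mimics that deferred-retry pattern under stated assumptions: the 500ms initial delay is taken from the log, while the doubling, the two-minute cap, and the mountOnce stand-in are illustrative and are not the kubelet's actual implementation.

// mount_retry_backoff.go: illustrative sketch of the deferred retry seen in
// the nestedpendingoperations message above. Only the 500ms initial delay is
// taken from the log; the doubling factor, the cap, and mountOnce are
// assumptions made for this sketch.
package main

import (
	"errors"
	"fmt"
	"time"
)

const (
	initialDelay = 500 * time.Millisecond
	maxDelay     = 2 * time.Minute // assumed cap, not taken from the log
)

// mountOnce stands in for setting up the projected kube-api-access volume;
// it fails until the third attempt, the way the real mount fails while the
// kube-root-ca.crt ConfigMap does not exist yet.
func mountOnce(attempt int) error {
	if attempt < 3 {
		return errors.New(`configmap "kube-root-ca.crt" not found`)
	}
	return nil
}

func main() {
	delay := initialDelay
	for attempt := 1; ; attempt++ {
		err := mountOnce(attempt)
		if err == nil {
			fmt.Printf("attempt %d: volume mounted\n", attempt)
			return
		}
		fmt.Printf("attempt %d failed (%v); no retries permitted for %v\n", attempt, err, delay)
		time.Sleep(delay) // the kubelet enforces the wait via its operation tracker rather than sleeping
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

Once the ConfigMap is published the next attempt succeeds, which is consistent with the kube-proxy-tvsnq sandbox being created a fraction of a second later in the records that follow.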
Dec 16 03:27:58.869000 audit: BPF prog-id=133 op=LOAD Dec 16 03:27:58.871499 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 16 03:27:58.871578 kernel: audit: type=1334 audit(1765855678.869:438): prog-id=133 op=LOAD Dec 16 03:27:58.872000 audit: BPF prog-id=134 op=LOAD Dec 16 03:27:58.872000 audit[2854]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2842 pid=2854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:58.875580 kernel: audit: type=1334 audit(1765855678.872:439): prog-id=134 op=LOAD Dec 16 03:27:58.875652 kernel: audit: type=1300 audit(1765855678.872:439): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2842 pid=2854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:58.880036 kubelet[2782]: E1216 03:27:58.879998 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:27:58.882617 containerd[1596]: time="2025-12-16T03:27:58.882568906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tvsnq,Uid:2fcf1552-7fbd-4850-948a-cd65ee7ebb59,Namespace:kube-system,Attempt:0,}" Dec 16 03:27:58.872000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165376262303138666664336366616333353339376331316663663134 Dec 16 03:27:58.888113 kernel: audit: type=1327 audit(1765855678.872:439): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165376262303138666664336366616333353339376331316663663134 Dec 16 03:27:58.872000 audit: BPF prog-id=134 op=UNLOAD Dec 16 03:27:58.890116 kernel: audit: type=1334 audit(1765855678.872:440): prog-id=134 op=UNLOAD Dec 16 03:27:58.872000 audit[2854]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2842 pid=2854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:58.894169 kernel: audit: type=1300 audit(1765855678.872:440): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2842 pid=2854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:58.894299 kernel: audit: type=1327 audit(1765855678.872:440): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165376262303138666664336366616333353339376331316663663134 Dec 16 03:27:58.872000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165376262303138666664336366616333353339376331316663663134 Dec 16 03:27:58.872000 audit: BPF prog-id=135 op=LOAD Dec 16 03:27:58.872000 audit[2854]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2842 pid=2854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:58.901446 kernel: audit: type=1334 audit(1765855678.872:441): prog-id=135 op=LOAD Dec 16 03:27:58.901496 kernel: audit: type=1300 audit(1765855678.872:441): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2842 pid=2854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:58.872000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165376262303138666664336366616333353339376331316663663134 Dec 16 03:27:58.908209 kernel: audit: type=1327 audit(1765855678.872:441): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165376262303138666664336366616333353339376331316663663134 Dec 16 03:27:58.872000 audit: BPF prog-id=136 op=LOAD Dec 16 03:27:58.872000 audit[2854]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2842 pid=2854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:58.872000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165376262303138666664336366616333353339376331316663663134 Dec 16 03:27:58.872000 audit: BPF prog-id=136 op=UNLOAD Dec 16 03:27:58.872000 audit[2854]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2842 pid=2854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:58.872000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165376262303138666664336366616333353339376331316663663134 Dec 16 03:27:58.872000 audit: BPF prog-id=135 op=UNLOAD Dec 16 03:27:58.872000 audit[2854]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2842 pid=2854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:58.872000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165376262303138666664336366616333353339376331316663663134 Dec 16 03:27:58.872000 audit: BPF prog-id=137 op=LOAD Dec 16 03:27:58.872000 audit[2854]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2842 pid=2854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:58.872000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165376262303138666664336366616333353339376331316663663134 Dec 16 03:27:58.930116 containerd[1596]: time="2025-12-16T03:27:58.930052568Z" level=info msg="connecting to shim 4d09a6d50046643575a3c4c242309f58399651578c756dae63e7ece716abea00" address="unix:///run/containerd/s/c67b331f971ed4b441517616bca060c999d309153acd3a962a9e6bf79841466b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:27:58.943424 containerd[1596]: time="2025-12-16T03:27:58.943369765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-jbsf6,Uid:61713379-9c4e-42e7-83d5-586008d2f155,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ae7bb018ffd3cfac35397c11fcf14bc0b52fc48a841cc23faeb1586b01e45e98\"" Dec 16 03:27:58.945837 containerd[1596]: time="2025-12-16T03:27:58.945786852Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 03:27:58.981480 systemd[1]: Started cri-containerd-4d09a6d50046643575a3c4c242309f58399651578c756dae63e7ece716abea00.scope - libcontainer container 4d09a6d50046643575a3c4c242309f58399651578c756dae63e7ece716abea00. 
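The audit: BPF prog-id ... op=LOAD records interleaved with the containerd messages here are bpf(2) calls (syscall 321 on x86_64) made by runc as it starts each container, most likely the cgroup v2 device-filter programs; the PROCTITLE field in each record is the runc command line, hex-encoded with NUL bytes between arguments. Decoding any of them yields a (auditd-truncated) runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/... invocation. The stdlib-only sketch below performs that decoding; the program and its name are illustrative and are not part of auditd or containerd.

// decode_proctitle.go: decode a hex-encoded audit PROCTITLE value back into argv.
package main

import (
	"encoding/hex"
	"fmt"
	"os"
	"strings"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: decode_proctitle <hex-proctitle>")
		os.Exit(1)
	}
	raw, err := hex.DecodeString(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, "not valid hex:", err)
		os.Exit(1)
	}
	// auditd hex-encodes PROCTITLE when the recorded command line contains
	// NUL separators between argv elements; split them back apart here.
	args := strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00")
	for i, a := range args {
		fmt.Printf("argv[%d] = %q\n", i, a)
	}
}

For example, go run decode_proctitle.go 72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F (a prefix of the proctitle values above) prints argv[0] = "runc", argv[1] = "--root", argv[2] = "/run/containerd/runc/k8s.io".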
Dec 16 03:27:58.996000 audit: BPF prog-id=138 op=LOAD Dec 16 03:27:58.997000 audit: BPF prog-id=139 op=LOAD Dec 16 03:27:58.997000 audit[2896]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=2886 pid=2896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:58.997000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464303961366435303034363634333537356133633463323432333039 Dec 16 03:27:58.997000 audit: BPF prog-id=139 op=UNLOAD Dec 16 03:27:58.997000 audit[2896]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2886 pid=2896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:58.997000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464303961366435303034363634333537356133633463323432333039 Dec 16 03:27:58.997000 audit: BPF prog-id=140 op=LOAD Dec 16 03:27:58.997000 audit[2896]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=2886 pid=2896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:58.997000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464303961366435303034363634333537356133633463323432333039 Dec 16 03:27:58.997000 audit: BPF prog-id=141 op=LOAD Dec 16 03:27:58.997000 audit[2896]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=2886 pid=2896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:58.997000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464303961366435303034363634333537356133633463323432333039 Dec 16 03:27:58.997000 audit: BPF prog-id=141 op=UNLOAD Dec 16 03:27:58.997000 audit[2896]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2886 pid=2896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:58.997000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464303961366435303034363634333537356133633463323432333039 Dec 16 03:27:58.997000 audit: BPF prog-id=140 op=UNLOAD Dec 16 03:27:58.997000 audit[2896]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2886 pid=2896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:58.997000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464303961366435303034363634333537356133633463323432333039 Dec 16 03:27:58.997000 audit: BPF prog-id=142 op=LOAD Dec 16 03:27:58.997000 audit[2896]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=2886 pid=2896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:58.997000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464303961366435303034363634333537356133633463323432333039 Dec 16 03:27:59.022280 containerd[1596]: time="2025-12-16T03:27:59.022230608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tvsnq,Uid:2fcf1552-7fbd-4850-948a-cd65ee7ebb59,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d09a6d50046643575a3c4c242309f58399651578c756dae63e7ece716abea00\"" Dec 16 03:27:59.023549 kubelet[2782]: E1216 03:27:59.023479 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:27:59.037656 containerd[1596]: time="2025-12-16T03:27:59.036810850Z" level=info msg="CreateContainer within sandbox \"4d09a6d50046643575a3c4c242309f58399651578c756dae63e7ece716abea00\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 03:27:59.055068 containerd[1596]: time="2025-12-16T03:27:59.054941037Z" level=info msg="Container 5bc548b6ea674fa5d1b2b7672d6d783e01be3bc0913199011f174c5ba1a43aaf: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:27:59.068211 containerd[1596]: time="2025-12-16T03:27:59.067682357Z" level=info msg="CreateContainer within sandbox \"4d09a6d50046643575a3c4c242309f58399651578c756dae63e7ece716abea00\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5bc548b6ea674fa5d1b2b7672d6d783e01be3bc0913199011f174c5ba1a43aaf\"" Dec 16 03:27:59.068400 containerd[1596]: time="2025-12-16T03:27:59.068355040Z" level=info msg="StartContainer for \"5bc548b6ea674fa5d1b2b7672d6d783e01be3bc0913199011f174c5ba1a43aaf\"" Dec 16 03:27:59.070407 containerd[1596]: time="2025-12-16T03:27:59.070368661Z" level=info msg="connecting to shim 5bc548b6ea674fa5d1b2b7672d6d783e01be3bc0913199011f174c5ba1a43aaf" address="unix:///run/containerd/s/c67b331f971ed4b441517616bca060c999d309153acd3a962a9e6bf79841466b" protocol=ttrpc version=3 Dec 16 03:27:59.098444 systemd[1]: Started cri-containerd-5bc548b6ea674fa5d1b2b7672d6d783e01be3bc0913199011f174c5ba1a43aaf.scope - libcontainer container 5bc548b6ea674fa5d1b2b7672d6d783e01be3bc0913199011f174c5ba1a43aaf. 
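The recurring dns.go:154 "Nameserver limits exceeded" errors are the kubelet trimming the node's resolv.conf for pod DNS: it applies at most three nameserver entries (matching the libc resolver limit), and the applied line in the log repeats 67.207.67.2, so at least one configured entry was omitted. The sketch below counts nameserver lines the same way; only the three-entry limit reflects the messages above, while the file handling and output format are illustrative.

// check_resolvconf.go: count nameserver entries and show what a three-entry
// limit would keep vs. omit. Illustrative sketch, not kubelet code.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // the limit behind the "Nameserver limits exceeded" messages

func main() {
	path := "/etc/resolv.conf"
	if len(os.Args) > 1 {
		path = os.Args[1] // allow pointing at a copy of the node's file
	}
	f, err := os.Open(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	if len(servers) <= maxNameservers {
		fmt.Printf("%d nameserver entries, within the limit: %v\n", len(servers), servers)
		return
	}
	fmt.Printf("%d nameserver entries; the first %d would be applied (%v) and the rest omitted (%v)\n",
		len(servers), maxNameservers, servers[:maxNameservers], servers[maxNameservers:])
}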
Dec 16 03:27:59.162000 audit: BPF prog-id=143 op=LOAD Dec 16 03:27:59.162000 audit[2923]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=2886 pid=2923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562633534386236656136373466613564316232623736373264366437 Dec 16 03:27:59.162000 audit: BPF prog-id=144 op=LOAD Dec 16 03:27:59.162000 audit[2923]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=2886 pid=2923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562633534386236656136373466613564316232623736373264366437 Dec 16 03:27:59.162000 audit: BPF prog-id=144 op=UNLOAD Dec 16 03:27:59.162000 audit[2923]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2886 pid=2923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562633534386236656136373466613564316232623736373264366437 Dec 16 03:27:59.162000 audit: BPF prog-id=143 op=UNLOAD Dec 16 03:27:59.162000 audit[2923]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2886 pid=2923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562633534386236656136373466613564316232623736373264366437 Dec 16 03:27:59.162000 audit: BPF prog-id=145 op=LOAD Dec 16 03:27:59.162000 audit[2923]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0002206e8 a2=98 a3=0 items=0 ppid=2886 pid=2923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562633534386236656136373466613564316232623736373264366437 Dec 16 03:27:59.192373 containerd[1596]: time="2025-12-16T03:27:59.192318120Z" level=info msg="StartContainer for 
\"5bc548b6ea674fa5d1b2b7672d6d783e01be3bc0913199011f174c5ba1a43aaf\" returns successfully" Dec 16 03:27:59.493133 kubelet[2782]: E1216 03:27:59.491220 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:27:59.506655 kubelet[2782]: I1216 03:27:59.506577 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tvsnq" podStartSLOduration=2.506556526 podStartE2EDuration="2.506556526s" podCreationTimestamp="2025-12-16 03:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:27:59.504794174 +0000 UTC m=+7.372006537" watchObservedRunningTime="2025-12-16 03:27:59.506556526 +0000 UTC m=+7.373768888" Dec 16 03:27:59.600000 audit[2986]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=2986 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:59.600000 audit[2986]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeb3022c30 a2=0 a3=7ffeb3022c1c items=0 ppid=2935 pid=2986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.600000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 03:27:59.602000 audit[2988]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=2988 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:59.602000 audit[2988]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe7eebc9f0 a2=0 a3=7ffe7eebc9dc items=0 ppid=2935 pid=2988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.602000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 03:27:59.602000 audit[2989]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=2989 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:59.602000 audit[2989]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe13ba5ea0 a2=0 a3=7ffe13ba5e8c items=0 ppid=2935 pid=2989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.602000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 03:27:59.604000 audit[2990]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=2990 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:59.604000 audit[2990]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd81c6b270 a2=0 a3=7ffd81c6b25c items=0 ppid=2935 pid=2990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.604000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 03:27:59.604000 
audit[2991]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=2991 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:59.604000 audit[2991]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff705cde10 a2=0 a3=7fff705cddfc items=0 ppid=2935 pid=2991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.604000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 03:27:59.606000 audit[2992]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=2992 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:59.606000 audit[2992]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffce0e8fff0 a2=0 a3=7ffce0e8ffdc items=0 ppid=2935 pid=2992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.606000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 03:27:59.714000 audit[2995]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=2995 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:59.714000 audit[2995]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffdc9e4b3e0 a2=0 a3=7ffdc9e4b3cc items=0 ppid=2935 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.714000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 03:27:59.720000 audit[2997]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=2997 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:59.720000 audit[2997]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff6ce81f70 a2=0 a3=7fff6ce81f5c items=0 ppid=2935 pid=2997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.720000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73002D Dec 16 03:27:59.725000 audit[3000]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3000 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:59.725000 audit[3000]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc4e3462a0 a2=0 a3=7ffc4e34628c items=0 ppid=2935 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.725000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Dec 16 03:27:59.728000 audit[3001]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3001 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:59.728000 audit[3001]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe7322def0 a2=0 a3=7ffe7322dedc items=0 ppid=2935 pid=3001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.728000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 03:27:59.732000 audit[3003]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3003 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:59.732000 audit[3003]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff265aaad0 a2=0 a3=7fff265aaabc items=0 ppid=2935 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.732000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 03:27:59.734000 audit[3004]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3004 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:59.734000 audit[3004]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffee7efb650 a2=0 a3=7ffee7efb63c items=0 ppid=2935 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.734000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 03:27:59.738000 audit[3006]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3006 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:59.738000 audit[3006]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe2dc855c0 a2=0 a3=7ffe2dc855ac items=0 ppid=2935 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.738000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 03:27:59.743000 audit[3009]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3009 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:59.743000 audit[3009]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff332aff10 a2=0 a3=7fff332afefc items=0 ppid=2935 pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.743000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 03:27:59.746000 audit[3010]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3010 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:59.746000 audit[3010]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd7957b5e0 a2=0 a3=7ffd7957b5cc items=0 ppid=2935 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.746000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 03:27:59.750000 audit[3012]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3012 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:59.750000 audit[3012]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd0e2e07e0 a2=0 a3=7ffd0e2e07cc items=0 ppid=2935 pid=3012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.750000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 03:27:59.752000 audit[3013]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3013 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:59.752000 audit[3013]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe0165eaa0 a2=0 a3=7ffe0165ea8c items=0 ppid=2935 pid=3013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.752000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 03:27:59.755000 audit[3015]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3015 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:59.755000 audit[3015]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffe0dcd5c0 a2=0 a3=7fffe0dcd5ac items=0 ppid=2935 pid=3015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.755000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F5859 Dec 16 03:27:59.760000 audit[3018]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3018 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:59.760000 
audit[3018]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe45697370 a2=0 a3=7ffe4569735c items=0 ppid=2935 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.760000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Dec 16 03:27:59.765000 audit[3021]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3021 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:59.765000 audit[3021]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffda1d69bd0 a2=0 a3=7ffda1d69bbc items=0 ppid=2935 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.765000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Dec 16 03:27:59.767000 audit[3022]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3022 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:59.767000 audit[3022]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe9e5f6f70 a2=0 a3=7ffe9e5f6f5c items=0 ppid=2935 pid=3022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.767000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 03:27:59.770000 audit[3024]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3024 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:59.770000 audit[3024]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffda5ee06e0 a2=0 a3=7ffda5ee06cc items=0 ppid=2935 pid=3024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.770000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 03:27:59.776000 audit[3027]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3027 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:59.776000 audit[3027]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd5570d930 a2=0 a3=7ffd5570d91c items=0 ppid=2935 pid=3027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.776000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 03:27:59.778000 audit[3028]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3028 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:59.778000 audit[3028]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff272346b0 a2=0 a3=7fff2723469c items=0 ppid=2935 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.778000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 03:27:59.781000 audit[3030]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:27:59.781000 audit[3030]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc79cc46e0 a2=0 a3=7ffc79cc46cc items=0 ppid=2935 pid=3030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.781000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 03:27:59.814000 audit[3036]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3036 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:27:59.814000 audit[3036]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe63b8ee00 a2=0 a3=7ffe63b8edec items=0 ppid=2935 pid=3036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.814000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:27:59.825000 audit[3036]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3036 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:27:59.825000 audit[3036]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffe63b8ee00 a2=0 a3=7ffe63b8edec items=0 ppid=2935 pid=3036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.825000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:27:59.829000 audit[3041]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3041 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:59.829000 audit[3041]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffec1525ed0 a2=0 a3=7ffec1525ebc items=0 ppid=2935 pid=3041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.829000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 03:27:59.836000 audit[3043]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3043 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:59.836000 audit[3043]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd4bb5e310 a2=0 a3=7ffd4bb5e2fc items=0 ppid=2935 pid=3043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.836000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Dec 16 03:27:59.843000 audit[3046]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3046 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:59.843000 audit[3046]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffdf2d300e0 a2=0 a3=7ffdf2d300cc items=0 ppid=2935 pid=3046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.843000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C Dec 16 03:27:59.847000 audit[3047]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3047 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:59.847000 audit[3047]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffed7d9a100 a2=0 a3=7ffed7d9a0ec items=0 ppid=2935 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.847000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 03:27:59.854000 audit[3049]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3049 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:59.854000 audit[3049]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe8bd85cc0 a2=0 a3=7ffe8bd85cac items=0 ppid=2935 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.854000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 03:27:59.856000 audit[3050]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3050 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:59.856000 audit[3050]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdc7b967d0 a2=0 a3=7ffdc7b967bc items=0 ppid=2935 pid=3050 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.856000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 03:27:59.862000 audit[3052]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3052 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:59.862000 audit[3052]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffddf3bd370 a2=0 a3=7ffddf3bd35c items=0 ppid=2935 pid=3052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.862000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 03:27:59.872000 audit[3055]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3055 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:59.872000 audit[3055]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffc4a74bba0 a2=0 a3=7ffc4a74bb8c items=0 ppid=2935 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.872000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 03:27:59.876000 audit[3056]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3056 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:59.876000 audit[3056]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd3ee2c810 a2=0 a3=7ffd3ee2c7fc items=0 ppid=2935 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.876000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 03:27:59.880000 audit[3058]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3058 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:59.880000 audit[3058]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcc6a15380 a2=0 a3=7ffcc6a1536c items=0 ppid=2935 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.880000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 03:27:59.884000 audit[3059]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3059 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:59.884000 audit[3059]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd80e98170 a2=0 a3=7ffd80e9815c items=0 ppid=2935 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.884000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 03:27:59.888000 audit[3061]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3061 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:59.888000 audit[3061]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffa63aeee0 a2=0 a3=7fffa63aeecc items=0 ppid=2935 pid=3061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.888000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Dec 16 03:27:59.896000 audit[3064]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3064 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:59.896000 audit[3064]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffed65e9aa0 a2=0 a3=7ffed65e9a8c items=0 ppid=2935 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.896000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Dec 16 03:27:59.903000 audit[3067]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3067 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:59.903000 audit[3067]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe640a2aa0 a2=0 a3=7ffe640a2a8c items=0 ppid=2935 pid=3067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.903000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D5052 Dec 16 03:27:59.904000 audit[3068]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3068 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:59.904000 audit[3068]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcf3c57710 a2=0 a3=7ffcf3c576fc items=0 ppid=2935 pid=3068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 03:27:59.904000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 03:27:59.908000 audit[3070]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3070 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:59.908000 audit[3070]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffee8207db0 a2=0 a3=7ffee8207d9c items=0 ppid=2935 pid=3070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.908000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 03:27:59.913000 audit[3073]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3073 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:59.913000 audit[3073]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffd3615bf0 a2=0 a3=7fffd3615bdc items=0 ppid=2935 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.913000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 03:27:59.914000 audit[3074]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3074 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:59.914000 audit[3074]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffccc4170f0 a2=0 a3=7ffccc4170dc items=0 ppid=2935 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.914000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 03:27:59.917000 audit[3076]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3076 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:59.917000 audit[3076]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffe8c61d310 a2=0 a3=7ffe8c61d2fc items=0 ppid=2935 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.917000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 03:27:59.919000 audit[3077]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3077 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:59.919000 audit[3077]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef43b4100 a2=0 a3=7ffef43b40ec items=0 ppid=2935 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.919000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 03:27:59.922000 audit[3079]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3079 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:59.922000 audit[3079]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe9e70a9b0 a2=0 a3=7ffe9e70a99c items=0 ppid=2935 pid=3079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.922000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 03:27:59.927000 audit[3082]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3082 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:27:59.927000 audit[3082]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd7a3b1700 a2=0 a3=7ffd7a3b16ec items=0 ppid=2935 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.927000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 03:27:59.935000 audit[3084]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3084 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 03:27:59.935000 audit[3084]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffe98726940 a2=0 a3=7ffe9872692c items=0 ppid=2935 pid=3084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.935000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:27:59.936000 audit[3084]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3084 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 03:27:59.936000 audit[3084]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffe98726940 a2=0 a3=7ffe9872692c items=0 ppid=2935 pid=3084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:59.936000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:28:00.328976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount421946330.mount: Deactivated successfully. 
Dec 16 03:28:01.160634 containerd[1596]: time="2025-12-16T03:28:01.160566475Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:28:01.162175 containerd[1596]: time="2025-12-16T03:28:01.162119846Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Dec 16 03:28:01.162683 containerd[1596]: time="2025-12-16T03:28:01.162650847Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:28:01.164942 containerd[1596]: time="2025-12-16T03:28:01.164827719Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:28:01.166043 containerd[1596]: time="2025-12-16T03:28:01.166005853Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.220177699s" Dec 16 03:28:01.166043 containerd[1596]: time="2025-12-16T03:28:01.166037790Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 16 03:28:01.170180 containerd[1596]: time="2025-12-16T03:28:01.170130185Z" level=info msg="CreateContainer within sandbox \"ae7bb018ffd3cfac35397c11fcf14bc0b52fc48a841cc23faeb1586b01e45e98\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 03:28:01.177483 containerd[1596]: time="2025-12-16T03:28:01.176980207Z" level=info msg="Container 078ba7c6bfc0f18182f11ac966a9c5fb7a82eee368eac0439c80bdc5ac697c96: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:28:01.183637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3812905628.mount: Deactivated successfully. Dec 16 03:28:01.191873 containerd[1596]: time="2025-12-16T03:28:01.191814628Z" level=info msg="CreateContainer within sandbox \"ae7bb018ffd3cfac35397c11fcf14bc0b52fc48a841cc23faeb1586b01e45e98\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"078ba7c6bfc0f18182f11ac966a9c5fb7a82eee368eac0439c80bdc5ac697c96\"" Dec 16 03:28:01.194734 containerd[1596]: time="2025-12-16T03:28:01.194686653Z" level=info msg="StartContainer for \"078ba7c6bfc0f18182f11ac966a9c5fb7a82eee368eac0439c80bdc5ac697c96\"" Dec 16 03:28:01.195814 containerd[1596]: time="2025-12-16T03:28:01.195768958Z" level=info msg="connecting to shim 078ba7c6bfc0f18182f11ac966a9c5fb7a82eee368eac0439c80bdc5ac697c96" address="unix:///run/containerd/s/e2598c305db1203b65fbbd8cb1968537a8aa8a985d918225b7a05a75c2499a3e" protocol=ttrpc version=3 Dec 16 03:28:01.231478 systemd[1]: Started cri-containerd-078ba7c6bfc0f18182f11ac966a9c5fb7a82eee368eac0439c80bdc5ac697c96.scope - libcontainer container 078ba7c6bfc0f18182f11ac966a9c5fb7a82eee368eac0439c80bdc5ac697c96. 
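For scale, the containerd messages above report 23558205 bytes read for the tigera-operator image pull over roughly 2.22 s. An illustrative back-of-the-envelope rate (the log itself does not report throughput):

    # Figures taken from the containerd messages above; the rate is an
    # illustration, not something the log states directly.
    bytes_read = 23_558_205        # "bytes read=23558205"
    pull_seconds = 2.220177699     # "in 2.220177699s"
    print(f"~{bytes_read / pull_seconds / 1e6:.1f} MB/s")  # ~10.6 MB/s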
Dec 16 03:28:01.247000 audit: BPF prog-id=146 op=LOAD Dec 16 03:28:01.249384 kubelet[2782]: E1216 03:28:01.248062 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:28:01.250000 audit: BPF prog-id=147 op=LOAD Dec 16 03:28:01.250000 audit[3093]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2842 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:01.250000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037386261376336626663306631383138326631316163393636613963 Dec 16 03:28:01.250000 audit: BPF prog-id=147 op=UNLOAD Dec 16 03:28:01.250000 audit[3093]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2842 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:01.250000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037386261376336626663306631383138326631316163393636613963 Dec 16 03:28:01.252000 audit: BPF prog-id=148 op=LOAD Dec 16 03:28:01.252000 audit[3093]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2842 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:01.252000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037386261376336626663306631383138326631316163393636613963 Dec 16 03:28:01.254000 audit: BPF prog-id=149 op=LOAD Dec 16 03:28:01.254000 audit[3093]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2842 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:01.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037386261376336626663306631383138326631316163393636613963 Dec 16 03:28:01.254000 audit: BPF prog-id=149 op=UNLOAD Dec 16 03:28:01.254000 audit[3093]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2842 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:01.254000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037386261376336626663306631383138326631316163393636613963 Dec 16 03:28:01.254000 audit: BPF prog-id=148 op=UNLOAD Dec 16 03:28:01.254000 audit[3093]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2842 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:01.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037386261376336626663306631383138326631316163393636613963 Dec 16 03:28:01.254000 audit: BPF prog-id=150 op=LOAD Dec 16 03:28:01.254000 audit[3093]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2842 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:01.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037386261376336626663306631383138326631316163393636613963 Dec 16 03:28:01.287389 containerd[1596]: time="2025-12-16T03:28:01.287343400Z" level=info msg="StartContainer for \"078ba7c6bfc0f18182f11ac966a9c5fb7a82eee368eac0439c80bdc5ac697c96\" returns successfully" Dec 16 03:28:01.506755 kubelet[2782]: E1216 03:28:01.506297 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:28:01.546692 kubelet[2782]: I1216 03:28:01.546617 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-jbsf6" podStartSLOduration=1.32509913 podStartE2EDuration="3.546596598s" podCreationTimestamp="2025-12-16 03:27:58 +0000 UTC" firstStartedPulling="2025-12-16 03:27:58.945263893 +0000 UTC m=+6.812476247" lastFinishedPulling="2025-12-16 03:28:01.166761374 +0000 UTC m=+9.033973715" observedRunningTime="2025-12-16 03:28:01.527827993 +0000 UTC m=+9.395040357" watchObservedRunningTime="2025-12-16 03:28:01.546596598 +0000 UTC m=+9.413808959" Dec 16 03:28:02.512954 kubelet[2782]: E1216 03:28:02.512432 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:28:03.502155 kubelet[2782]: E1216 03:28:03.500285 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:28:05.295835 update_engine[1577]: I20251216 03:28:05.295137 1577 update_attempter.cc:509] Updating boot flags... 
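The pod_startup_latency_tracker record above is internally consistent: podStartE2EDuration is the gap between podCreationTimestamp (03:27:58) and watchObservedRunningTime (03:28:01.546596598), and podStartSLOduration matches that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick check using the m=+... offsets quoted in the record:

    # Offsets and durations quoted verbatim from the kubelet record above.
    first_started_pulling = 6.812476247   # firstStartedPulling m=+6.812476247
    last_finished_pulling = 9.033973715   # lastFinishedPulling m=+9.033973715
    pod_start_e2e = 3.546596598           # podStartE2EDuration

    # SLO duration excludes the image-pull window:
    slo = pod_start_e2e - (last_finished_pulling - first_started_pulling)
    print(round(slo, 8))                  # 1.32509913, as logged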
Dec 16 03:28:06.608053 sudo[1829]: pam_unix(sudo:session): session closed for user root Dec 16 03:28:06.607000 audit[1829]: USER_END pid=1829 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:28:06.612201 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 16 03:28:06.612267 kernel: audit: type=1106 audit(1765855686.607:518): pid=1829 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:28:06.617657 sshd[1828]: Connection closed by 147.75.109.163 port 36172 Dec 16 03:28:06.620480 kernel: audit: type=1104 audit(1765855686.607:519): pid=1829 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:28:06.607000 audit[1829]: CRED_DISP pid=1829 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:28:06.619450 sshd-session[1824]: pam_unix(sshd:session): session closed for user core Dec 16 03:28:06.624000 audit[1824]: USER_END pid=1824 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:06.630936 systemd[1]: sshd@6-144.126.212.19:22-147.75.109.163:36172.service: Deactivated successfully. Dec 16 03:28:06.633128 kernel: audit: type=1106 audit(1765855686.624:520): pid=1824 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:06.633306 kernel: audit: type=1104 audit(1765855686.625:521): pid=1824 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:06.625000 audit[1824]: CRED_DISP pid=1824 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:06.638319 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 03:28:06.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-144.126.212.19:22-147.75.109.163:36172 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:06.639689 systemd[1]: session-8.scope: Consumed 5.679s CPU time, 167M memory peak. 
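From here on the journal carries many audit records twice: once decoded by name (USER_END, CRED_DISP, SERVICE_STOP, NETFILTER_CFG, SYSCALL, PROCTITLE) and once as a raw kernel "audit: type=NNNN" line with the same timestamp and serial. For reference (these numeric values come from linux/audit.h, not from the log), the types seen in this section map as follows:

    # Record types appearing in the kernel "audit: type=NNNN" lines here,
    # per include/uapi/linux/audit.h (listed for reference only).
    AUDIT_TYPES = {
        1104: "CRED_DISP",
        1106: "USER_END",
        1131: "SERVICE_STOP",
        1300: "SYSCALL",
        1325: "NETFILTER_CFG",
        1327: "PROCTITLE",
    }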
Dec 16 03:28:06.645131 kernel: audit: type=1131 audit(1765855686.631:522): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-144.126.212.19:22-147.75.109.163:36172 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:06.647524 systemd-logind[1576]: Session 8 logged out. Waiting for processes to exit. Dec 16 03:28:06.652438 systemd-logind[1576]: Removed session 8. Dec 16 03:28:07.577000 audit[3193]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3193 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:28:07.582183 kernel: audit: type=1325 audit(1765855687.577:523): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3193 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:28:07.577000 audit[3193]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe8fe718b0 a2=0 a3=7ffe8fe7189c items=0 ppid=2935 pid=3193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:07.589176 kernel: audit: type=1300 audit(1765855687.577:523): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe8fe718b0 a2=0 a3=7ffe8fe7189c items=0 ppid=2935 pid=3193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:07.577000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:28:07.596185 kernel: audit: type=1327 audit(1765855687.577:523): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:28:07.590000 audit[3193]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3193 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:28:07.600156 kernel: audit: type=1325 audit(1765855687.590:524): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3193 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:28:07.590000 audit[3193]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe8fe718b0 a2=0 a3=0 items=0 ppid=2935 pid=3193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:07.609445 kernel: audit: type=1300 audit(1765855687.590:524): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe8fe718b0 a2=0 a3=0 items=0 ppid=2935 pid=3193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:07.590000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:28:08.614000 audit[3195]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3195 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:28:08.614000 audit[3195]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc531ef600 a2=0 a3=7ffc531ef5ec items=0 ppid=2935 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:08.614000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:28:08.620000 audit[3195]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3195 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:28:08.620000 audit[3195]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc531ef600 a2=0 a3=0 items=0 ppid=2935 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:08.620000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:28:10.940000 audit[3198]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3198 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:28:10.940000 audit[3198]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe7edea650 a2=0 a3=7ffe7edea63c items=0 ppid=2935 pid=3198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:10.940000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:28:10.944000 audit[3198]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3198 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:28:10.944000 audit[3198]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe7edea650 a2=0 a3=0 items=0 ppid=2935 pid=3198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:10.944000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:28:12.019000 audit[3200]: NETFILTER_CFG table=filter:111 family=2 entries=19 op=nft_register_rule pid=3200 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:28:12.021331 kernel: kauditd_printk_skb: 13 callbacks suppressed Dec 16 03:28:12.021446 kernel: audit: type=1325 audit(1765855692.019:529): table=filter:111 family=2 entries=19 op=nft_register_rule pid=3200 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:28:12.019000 audit[3200]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd028a1880 a2=0 a3=7ffd028a186c items=0 ppid=2935 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:12.026030 kernel: audit: type=1300 audit(1765855692.019:529): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd028a1880 a2=0 a3=7ffd028a186c items=0 ppid=2935 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:12.019000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:28:12.029993 kernel: audit: type=1327 audit(1765855692.019:529): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:28:12.033000 audit[3200]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3200 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:28:12.033000 audit[3200]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd028a1880 a2=0 a3=0 items=0 ppid=2935 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:12.038970 kernel: audit: type=1325 audit(1765855692.033:530): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3200 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:28:12.039103 kernel: audit: type=1300 audit(1765855692.033:530): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd028a1880 a2=0 a3=0 items=0 ppid=2935 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:12.033000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:28:12.042966 kernel: audit: type=1327 audit(1765855692.033:530): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:28:13.166000 audit[3203]: NETFILTER_CFG table=filter:113 family=2 entries=21 op=nft_register_rule pid=3203 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:28:13.172136 kernel: audit: type=1325 audit(1765855693.166:531): table=filter:113 family=2 entries=21 op=nft_register_rule pid=3203 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:28:13.166000 audit[3203]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd3f1fb980 a2=0 a3=7ffd3f1fb96c items=0 ppid=2935 pid=3203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:13.179164 kernel: audit: type=1300 audit(1765855693.166:531): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd3f1fb980 a2=0 a3=7ffd3f1fb96c items=0 ppid=2935 pid=3203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:13.166000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:28:13.190176 kernel: audit: type=1327 audit(1765855693.166:531): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:28:13.222028 systemd[1]: Created slice kubepods-besteffort-pod7d0afeb1_2a35_439a_acf5_b1f93cf328fa.slice - libcontainer container kubepods-besteffort-pod7d0afeb1_2a35_439a_acf5_b1f93cf328fa.slice. 
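The iptables-restore audit records above show kube-proxy's IPv4 filter table growing as rules are added (entries=15, 16, 17, 19, 21 across the successive restores). A small parsing sketch, assuming the journal text is available as a string, for extracting that progression from NETFILTER_CFG records like these:

    import re

    # Sketch: pull the IPv4 filter-table entry counts out of NETFILTER_CFG
    # audit records like the ones above (family=2 is AF_INET).
    FILTER_RULE = re.compile(
        r"table=filter:\d+ family=2 entries=(\d+) op=nft_register_rule"
    )

    def filter_entry_counts(journal_text: str) -> list[int]:
        return [int(m.group(1)) for m in FILTER_RULE.finditer(journal_text)]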
Dec 16 03:28:13.262000 audit[3203]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3203 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:28:13.267480 kernel: audit: type=1325 audit(1765855693.262:532): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3203 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:28:13.262000 audit[3203]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd3f1fb980 a2=0 a3=0 items=0 ppid=2935 pid=3203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:13.262000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:28:13.291206 kubelet[2782]: I1216 03:28:13.291162 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7hqk\" (UniqueName: \"kubernetes.io/projected/7d0afeb1-2a35-439a-acf5-b1f93cf328fa-kube-api-access-r7hqk\") pod \"calico-typha-78dccc9df6-lhjf5\" (UID: \"7d0afeb1-2a35-439a-acf5-b1f93cf328fa\") " pod="calico-system/calico-typha-78dccc9df6-lhjf5" Dec 16 03:28:13.291206 kubelet[2782]: I1216 03:28:13.291205 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7d0afeb1-2a35-439a-acf5-b1f93cf328fa-typha-certs\") pod \"calico-typha-78dccc9df6-lhjf5\" (UID: \"7d0afeb1-2a35-439a-acf5-b1f93cf328fa\") " pod="calico-system/calico-typha-78dccc9df6-lhjf5" Dec 16 03:28:13.291782 kubelet[2782]: I1216 03:28:13.291226 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d0afeb1-2a35-439a-acf5-b1f93cf328fa-tigera-ca-bundle\") pod \"calico-typha-78dccc9df6-lhjf5\" (UID: \"7d0afeb1-2a35-439a-acf5-b1f93cf328fa\") " pod="calico-system/calico-typha-78dccc9df6-lhjf5" Dec 16 03:28:13.366761 systemd[1]: Created slice kubepods-besteffort-pod7e31ac7b_e1c1_4ae7_8a3c_301c92f50ae2.slice - libcontainer container kubepods-besteffort-pod7e31ac7b_e1c1_4ae7_8a3c_301c92f50ae2.slice. 
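The two kubepods-besteffort-pod....slice units created above line up with the pod UIDs in the surrounding volume records: the unit name is built from the QoS class and the pod UID with dashes mapped to underscores. An illustrative helper (not a kubelet API) checked against the calico-typha pod above:

    # Illustrative helper: reproduce the slice name seen above from a pod
    # UID, for the BestEffort QoS class.
    def besteffort_slice(pod_uid: str) -> str:
        return f"kubepods-besteffort-pod{pod_uid.replace('-', '_')}.slice"

    # UID of calico-typha-78dccc9df6-lhjf5 from the volume records above:
    assert besteffort_slice("7d0afeb1-2a35-439a-acf5-b1f93cf328fa") == \
        "kubepods-besteffort-pod7d0afeb1_2a35_439a_acf5_b1f93cf328fa.slice"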
Dec 16 03:28:13.391828 kubelet[2782]: I1216 03:28:13.391463 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7e31ac7b-e1c1-4ae7-8a3c-301c92f50ae2-cni-log-dir\") pod \"calico-node-4hms4\" (UID: \"7e31ac7b-e1c1-4ae7-8a3c-301c92f50ae2\") " pod="calico-system/calico-node-4hms4" Dec 16 03:28:13.391828 kubelet[2782]: I1216 03:28:13.391582 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vb9w\" (UniqueName: \"kubernetes.io/projected/7e31ac7b-e1c1-4ae7-8a3c-301c92f50ae2-kube-api-access-2vb9w\") pod \"calico-node-4hms4\" (UID: \"7e31ac7b-e1c1-4ae7-8a3c-301c92f50ae2\") " pod="calico-system/calico-node-4hms4" Dec 16 03:28:13.391828 kubelet[2782]: I1216 03:28:13.391604 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7e31ac7b-e1c1-4ae7-8a3c-301c92f50ae2-cni-net-dir\") pod \"calico-node-4hms4\" (UID: \"7e31ac7b-e1c1-4ae7-8a3c-301c92f50ae2\") " pod="calico-system/calico-node-4hms4" Dec 16 03:28:13.391828 kubelet[2782]: I1216 03:28:13.391620 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e31ac7b-e1c1-4ae7-8a3c-301c92f50ae2-tigera-ca-bundle\") pod \"calico-node-4hms4\" (UID: \"7e31ac7b-e1c1-4ae7-8a3c-301c92f50ae2\") " pod="calico-system/calico-node-4hms4" Dec 16 03:28:13.391828 kubelet[2782]: I1216 03:28:13.391636 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e31ac7b-e1c1-4ae7-8a3c-301c92f50ae2-lib-modules\") pod \"calico-node-4hms4\" (UID: \"7e31ac7b-e1c1-4ae7-8a3c-301c92f50ae2\") " pod="calico-system/calico-node-4hms4" Dec 16 03:28:13.392129 kubelet[2782]: I1216 03:28:13.391652 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7e31ac7b-e1c1-4ae7-8a3c-301c92f50ae2-node-certs\") pod \"calico-node-4hms4\" (UID: \"7e31ac7b-e1c1-4ae7-8a3c-301c92f50ae2\") " pod="calico-system/calico-node-4hms4" Dec 16 03:28:13.393575 kubelet[2782]: I1216 03:28:13.393037 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e31ac7b-e1c1-4ae7-8a3c-301c92f50ae2-xtables-lock\") pod \"calico-node-4hms4\" (UID: \"7e31ac7b-e1c1-4ae7-8a3c-301c92f50ae2\") " pod="calico-system/calico-node-4hms4" Dec 16 03:28:13.393575 kubelet[2782]: I1216 03:28:13.393121 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7e31ac7b-e1c1-4ae7-8a3c-301c92f50ae2-var-run-calico\") pod \"calico-node-4hms4\" (UID: \"7e31ac7b-e1c1-4ae7-8a3c-301c92f50ae2\") " pod="calico-system/calico-node-4hms4" Dec 16 03:28:13.393575 kubelet[2782]: I1216 03:28:13.393180 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7e31ac7b-e1c1-4ae7-8a3c-301c92f50ae2-flexvol-driver-host\") pod \"calico-node-4hms4\" (UID: \"7e31ac7b-e1c1-4ae7-8a3c-301c92f50ae2\") " pod="calico-system/calico-node-4hms4" Dec 16 03:28:13.393575 kubelet[2782]: I1216 03:28:13.393199 2782 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7e31ac7b-e1c1-4ae7-8a3c-301c92f50ae2-policysync\") pod \"calico-node-4hms4\" (UID: \"7e31ac7b-e1c1-4ae7-8a3c-301c92f50ae2\") " pod="calico-system/calico-node-4hms4" Dec 16 03:28:13.393575 kubelet[2782]: I1216 03:28:13.393212 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7e31ac7b-e1c1-4ae7-8a3c-301c92f50ae2-var-lib-calico\") pod \"calico-node-4hms4\" (UID: \"7e31ac7b-e1c1-4ae7-8a3c-301c92f50ae2\") " pod="calico-system/calico-node-4hms4" Dec 16 03:28:13.393873 kubelet[2782]: I1216 03:28:13.393242 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7e31ac7b-e1c1-4ae7-8a3c-301c92f50ae2-cni-bin-dir\") pod \"calico-node-4hms4\" (UID: \"7e31ac7b-e1c1-4ae7-8a3c-301c92f50ae2\") " pod="calico-system/calico-node-4hms4" Dec 16 03:28:13.503938 kubelet[2782]: E1216 03:28:13.502789 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.504895 kubelet[2782]: W1216 03:28:13.504129 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.504895 kubelet[2782]: E1216 03:28:13.504177 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.526703 kubelet[2782]: E1216 03:28:13.526654 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.526703 kubelet[2782]: W1216 03:28:13.526681 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.526703 kubelet[2782]: E1216 03:28:13.526705 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:28:13.528283 kubelet[2782]: E1216 03:28:13.528252 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:28:13.530899 containerd[1596]: time="2025-12-16T03:28:13.530856937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-78dccc9df6-lhjf5,Uid:7d0afeb1-2a35-439a-acf5-b1f93cf328fa,Namespace:calico-system,Attempt:0,}" Dec 16 03:28:13.556933 kubelet[2782]: E1216 03:28:13.556759 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hrg2x" podUID="8744312e-a06c-4ec6-97fa-99683d819e93" Dec 16 03:28:13.571047 kubelet[2782]: E1216 03:28:13.570998 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.571496 kubelet[2782]: W1216 03:28:13.571413 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.571496 kubelet[2782]: E1216 03:28:13.571455 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.573897 kubelet[2782]: E1216 03:28:13.573861 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.573897 kubelet[2782]: W1216 03:28:13.573886 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.575361 kubelet[2782]: E1216 03:28:13.575295 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.575464 containerd[1596]: time="2025-12-16T03:28:13.574176020Z" level=info msg="connecting to shim 861bf4d45d322059eaee39e60783f6e9868c004f1414e0fc33549eed27fff211" address="unix:///run/containerd/s/536d978b603a4bb806bde45485898b5677dc33a435ec2a269ea5fb1bac73b695" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:28:13.577126 kubelet[2782]: E1216 03:28:13.575802 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.577126 kubelet[2782]: W1216 03:28:13.575820 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.577126 kubelet[2782]: E1216 03:28:13.575837 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:28:13.577454 kubelet[2782]: E1216 03:28:13.577168 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.577454 kubelet[2782]: W1216 03:28:13.577183 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.577454 kubelet[2782]: E1216 03:28:13.577201 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.578183 kubelet[2782]: E1216 03:28:13.578132 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.578183 kubelet[2782]: W1216 03:28:13.578152 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.578183 kubelet[2782]: E1216 03:28:13.578169 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.578847 kubelet[2782]: E1216 03:28:13.578818 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.579155 kubelet[2782]: W1216 03:28:13.579112 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.579155 kubelet[2782]: E1216 03:28:13.579141 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.580044 kubelet[2782]: E1216 03:28:13.579724 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.580044 kubelet[2782]: W1216 03:28:13.579741 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.580044 kubelet[2782]: E1216 03:28:13.579758 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.580844 kubelet[2782]: E1216 03:28:13.580724 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.580844 kubelet[2782]: W1216 03:28:13.580741 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.580844 kubelet[2782]: E1216 03:28:13.580754 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:28:13.582193 kubelet[2782]: E1216 03:28:13.581696 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.582193 kubelet[2782]: W1216 03:28:13.581713 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.582193 kubelet[2782]: E1216 03:28:13.581726 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.582781 kubelet[2782]: E1216 03:28:13.582753 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.582781 kubelet[2782]: W1216 03:28:13.582770 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.582781 kubelet[2782]: E1216 03:28:13.582782 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.583042 kubelet[2782]: E1216 03:28:13.583027 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.583042 kubelet[2782]: W1216 03:28:13.583039 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.583195 kubelet[2782]: E1216 03:28:13.583124 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.583790 kubelet[2782]: E1216 03:28:13.583772 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.583790 kubelet[2782]: W1216 03:28:13.583787 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.583889 kubelet[2782]: E1216 03:28:13.583799 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.584809 kubelet[2782]: E1216 03:28:13.584785 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.584809 kubelet[2782]: W1216 03:28:13.584802 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.586150 kubelet[2782]: E1216 03:28:13.584814 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:28:13.587301 kubelet[2782]: E1216 03:28:13.587262 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.587301 kubelet[2782]: W1216 03:28:13.587283 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.587301 kubelet[2782]: E1216 03:28:13.587301 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.588067 kubelet[2782]: E1216 03:28:13.588049 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.588067 kubelet[2782]: W1216 03:28:13.588066 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.588164 kubelet[2782]: E1216 03:28:13.588080 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.588526 kubelet[2782]: E1216 03:28:13.588509 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.588579 kubelet[2782]: W1216 03:28:13.588526 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.588579 kubelet[2782]: E1216 03:28:13.588543 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.590325 kubelet[2782]: E1216 03:28:13.590293 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.590325 kubelet[2782]: W1216 03:28:13.590311 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.590325 kubelet[2782]: E1216 03:28:13.590325 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.591211 kubelet[2782]: E1216 03:28:13.591191 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.591254 kubelet[2782]: W1216 03:28:13.591212 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.591254 kubelet[2782]: E1216 03:28:13.591228 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:28:13.593917 kubelet[2782]: E1216 03:28:13.593889 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.593917 kubelet[2782]: W1216 03:28:13.593912 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.594029 kubelet[2782]: E1216 03:28:13.593929 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.595165 kubelet[2782]: E1216 03:28:13.595140 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.595165 kubelet[2782]: W1216 03:28:13.595160 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.595269 kubelet[2782]: E1216 03:28:13.595176 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.596523 kubelet[2782]: E1216 03:28:13.596410 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.596523 kubelet[2782]: W1216 03:28:13.596428 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.596523 kubelet[2782]: E1216 03:28:13.596442 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.596523 kubelet[2782]: I1216 03:28:13.596477 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8744312e-a06c-4ec6-97fa-99683d819e93-socket-dir\") pod \"csi-node-driver-hrg2x\" (UID: \"8744312e-a06c-4ec6-97fa-99683d819e93\") " pod="calico-system/csi-node-driver-hrg2x" Dec 16 03:28:13.598116 kubelet[2782]: E1216 03:28:13.597923 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.598116 kubelet[2782]: W1216 03:28:13.597945 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.598116 kubelet[2782]: E1216 03:28:13.597960 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:28:13.598116 kubelet[2782]: I1216 03:28:13.597984 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8744312e-a06c-4ec6-97fa-99683d819e93-registration-dir\") pod \"csi-node-driver-hrg2x\" (UID: \"8744312e-a06c-4ec6-97fa-99683d819e93\") " pod="calico-system/csi-node-driver-hrg2x" Dec 16 03:28:13.599237 kubelet[2782]: E1216 03:28:13.599212 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.599237 kubelet[2782]: W1216 03:28:13.599231 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.599365 kubelet[2782]: E1216 03:28:13.599244 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.599365 kubelet[2782]: I1216 03:28:13.599265 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbc9h\" (UniqueName: \"kubernetes.io/projected/8744312e-a06c-4ec6-97fa-99683d819e93-kube-api-access-sbc9h\") pod \"csi-node-driver-hrg2x\" (UID: \"8744312e-a06c-4ec6-97fa-99683d819e93\") " pod="calico-system/csi-node-driver-hrg2x" Dec 16 03:28:13.600997 kubelet[2782]: E1216 03:28:13.600669 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.600997 kubelet[2782]: W1216 03:28:13.600688 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.600997 kubelet[2782]: E1216 03:28:13.600702 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.602179 kubelet[2782]: I1216 03:28:13.602144 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8744312e-a06c-4ec6-97fa-99683d819e93-kubelet-dir\") pod \"csi-node-driver-hrg2x\" (UID: \"8744312e-a06c-4ec6-97fa-99683d819e93\") " pod="calico-system/csi-node-driver-hrg2x" Dec 16 03:28:13.602382 kubelet[2782]: E1216 03:28:13.602367 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.602382 kubelet[2782]: W1216 03:28:13.602381 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.602456 kubelet[2782]: E1216 03:28:13.602393 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:28:13.602634 kubelet[2782]: E1216 03:28:13.602619 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.602634 kubelet[2782]: W1216 03:28:13.602631 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.602706 kubelet[2782]: E1216 03:28:13.602641 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.602905 kubelet[2782]: E1216 03:28:13.602837 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.602905 kubelet[2782]: W1216 03:28:13.602848 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.602905 kubelet[2782]: E1216 03:28:13.602856 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.603108 kubelet[2782]: E1216 03:28:13.603016 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.603108 kubelet[2782]: W1216 03:28:13.603036 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.603108 kubelet[2782]: E1216 03:28:13.603044 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.603108 kubelet[2782]: I1216 03:28:13.603068 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8744312e-a06c-4ec6-97fa-99683d819e93-varrun\") pod \"csi-node-driver-hrg2x\" (UID: \"8744312e-a06c-4ec6-97fa-99683d819e93\") " pod="calico-system/csi-node-driver-hrg2x" Dec 16 03:28:13.603459 kubelet[2782]: E1216 03:28:13.603385 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.603459 kubelet[2782]: W1216 03:28:13.603400 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.603459 kubelet[2782]: E1216 03:28:13.603411 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:28:13.604531 kubelet[2782]: E1216 03:28:13.604499 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.604531 kubelet[2782]: W1216 03:28:13.604515 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.604531 kubelet[2782]: E1216 03:28:13.604526 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.605589 kubelet[2782]: E1216 03:28:13.605553 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.605589 kubelet[2782]: W1216 03:28:13.605568 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.605589 kubelet[2782]: E1216 03:28:13.605579 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.606325 kubelet[2782]: E1216 03:28:13.606256 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.606325 kubelet[2782]: W1216 03:28:13.606271 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.606325 kubelet[2782]: E1216 03:28:13.606282 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.607264 kubelet[2782]: E1216 03:28:13.607181 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.607264 kubelet[2782]: W1216 03:28:13.607195 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.607264 kubelet[2782]: E1216 03:28:13.607206 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.608340 kubelet[2782]: E1216 03:28:13.608228 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.608340 kubelet[2782]: W1216 03:28:13.608243 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.608340 kubelet[2782]: E1216 03:28:13.608255 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:28:13.608466 kubelet[2782]: E1216 03:28:13.608453 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.608466 kubelet[2782]: W1216 03:28:13.608465 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.608515 kubelet[2782]: E1216 03:28:13.608474 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.629294 systemd[1]: Started cri-containerd-861bf4d45d322059eaee39e60783f6e9868c004f1414e0fc33549eed27fff211.scope - libcontainer container 861bf4d45d322059eaee39e60783f6e9868c004f1414e0fc33549eed27fff211. Dec 16 03:28:13.655000 audit: BPF prog-id=151 op=LOAD Dec 16 03:28:13.656000 audit: BPF prog-id=152 op=LOAD Dec 16 03:28:13.656000 audit[3256]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3224 pid=3256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:13.656000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836316266346434356433323230353965616565333965363037383366 Dec 16 03:28:13.656000 audit: BPF prog-id=152 op=UNLOAD Dec 16 03:28:13.656000 audit[3256]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3224 pid=3256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:13.656000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836316266346434356433323230353965616565333965363037383366 Dec 16 03:28:13.657000 audit: BPF prog-id=153 op=LOAD Dec 16 03:28:13.657000 audit[3256]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3224 pid=3256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:13.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836316266346434356433323230353965616565333965363037383366 Dec 16 03:28:13.657000 audit: BPF prog-id=154 op=LOAD Dec 16 03:28:13.657000 audit[3256]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3224 pid=3256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:13.657000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836316266346434356433323230353965616565333965363037383366 Dec 16 03:28:13.657000 audit: BPF prog-id=154 op=UNLOAD Dec 16 03:28:13.657000 audit[3256]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3224 pid=3256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:13.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836316266346434356433323230353965616565333965363037383366 Dec 16 03:28:13.657000 audit: BPF prog-id=153 op=UNLOAD Dec 16 03:28:13.657000 audit[3256]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3224 pid=3256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:13.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836316266346434356433323230353965616565333965363037383366 Dec 16 03:28:13.657000 audit: BPF prog-id=155 op=LOAD Dec 16 03:28:13.657000 audit[3256]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3224 pid=3256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:13.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836316266346434356433323230353965616565333965363037383366 Dec 16 03:28:13.674532 kubelet[2782]: E1216 03:28:13.674490 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:28:13.676549 containerd[1596]: time="2025-12-16T03:28:13.676508242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4hms4,Uid:7e31ac7b-e1c1-4ae7-8a3c-301c92f50ae2,Namespace:calico-system,Attempt:0,}" Dec 16 03:28:13.706588 containerd[1596]: time="2025-12-16T03:28:13.706514312Z" level=info msg="connecting to shim 61abb08980ebe5e486176a4d4989d22967859019bc8cdee89f3d1f5e743be48c" address="unix:///run/containerd/s/d8356615bef8c78ceb72ba915aba9f6f685e1d356682a76c1e63dc0f2371c5e3" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:28:13.708131 kubelet[2782]: E1216 03:28:13.708083 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.708131 kubelet[2782]: W1216 03:28:13.708124 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 
16 03:28:13.708340 kubelet[2782]: E1216 03:28:13.708158 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.709056 kubelet[2782]: E1216 03:28:13.709032 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.709056 kubelet[2782]: W1216 03:28:13.709052 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.709191 kubelet[2782]: E1216 03:28:13.709067 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.710402 kubelet[2782]: E1216 03:28:13.710382 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.710402 kubelet[2782]: W1216 03:28:13.710397 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.710516 kubelet[2782]: E1216 03:28:13.710423 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.710757 kubelet[2782]: E1216 03:28:13.710737 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.710815 kubelet[2782]: W1216 03:28:13.710758 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.710815 kubelet[2782]: E1216 03:28:13.710775 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.711131 kubelet[2782]: E1216 03:28:13.711113 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.711131 kubelet[2782]: W1216 03:28:13.711130 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.711208 kubelet[2782]: E1216 03:28:13.711141 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:28:13.711508 kubelet[2782]: E1216 03:28:13.711460 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.711508 kubelet[2782]: W1216 03:28:13.711474 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.711508 kubelet[2782]: E1216 03:28:13.711485 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.712012 kubelet[2782]: E1216 03:28:13.711995 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.712012 kubelet[2782]: W1216 03:28:13.712007 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.712098 kubelet[2782]: E1216 03:28:13.712018 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.712240 kubelet[2782]: E1216 03:28:13.712212 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.712240 kubelet[2782]: W1216 03:28:13.712222 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.712240 kubelet[2782]: E1216 03:28:13.712231 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.712734 kubelet[2782]: E1216 03:28:13.712716 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.712734 kubelet[2782]: W1216 03:28:13.712729 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.712806 kubelet[2782]: E1216 03:28:13.712753 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.713062 kubelet[2782]: E1216 03:28:13.713044 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.713543 kubelet[2782]: W1216 03:28:13.713515 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.713543 kubelet[2782]: E1216 03:28:13.713538 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:28:13.714291 kubelet[2782]: E1216 03:28:13.714271 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.714291 kubelet[2782]: W1216 03:28:13.714285 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.714379 kubelet[2782]: E1216 03:28:13.714296 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.715169 kubelet[2782]: E1216 03:28:13.715145 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.715169 kubelet[2782]: W1216 03:28:13.715158 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.715169 kubelet[2782]: E1216 03:28:13.715169 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.715446 kubelet[2782]: E1216 03:28:13.715426 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.715446 kubelet[2782]: W1216 03:28:13.715438 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.715446 kubelet[2782]: E1216 03:28:13.715448 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.715841 kubelet[2782]: E1216 03:28:13.715825 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.715841 kubelet[2782]: W1216 03:28:13.715840 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.715921 kubelet[2782]: E1216 03:28:13.715851 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.716439 kubelet[2782]: E1216 03:28:13.716412 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.716439 kubelet[2782]: W1216 03:28:13.716431 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.716439 kubelet[2782]: E1216 03:28:13.716441 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:28:13.716942 kubelet[2782]: E1216 03:28:13.716925 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.716942 kubelet[2782]: W1216 03:28:13.716937 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.717031 kubelet[2782]: E1216 03:28:13.716948 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.717450 kubelet[2782]: E1216 03:28:13.717432 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.717450 kubelet[2782]: W1216 03:28:13.717445 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.717520 kubelet[2782]: E1216 03:28:13.717456 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.718098 kubelet[2782]: E1216 03:28:13.718069 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.718970 kubelet[2782]: W1216 03:28:13.718084 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.718970 kubelet[2782]: E1216 03:28:13.718201 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.718970 kubelet[2782]: E1216 03:28:13.718531 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.718970 kubelet[2782]: W1216 03:28:13.718540 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.718970 kubelet[2782]: E1216 03:28:13.718550 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.719162 kubelet[2782]: E1216 03:28:13.719013 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.719162 kubelet[2782]: W1216 03:28:13.719029 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.719162 kubelet[2782]: E1216 03:28:13.719040 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:28:13.720310 kubelet[2782]: E1216 03:28:13.720279 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.720310 kubelet[2782]: W1216 03:28:13.720298 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.720411 kubelet[2782]: E1216 03:28:13.720313 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.721058 kubelet[2782]: E1216 03:28:13.721037 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.721058 kubelet[2782]: W1216 03:28:13.721054 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.721176 kubelet[2782]: E1216 03:28:13.721068 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.722967 kubelet[2782]: E1216 03:28:13.722944 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.722967 kubelet[2782]: W1216 03:28:13.722961 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.723051 kubelet[2782]: E1216 03:28:13.722976 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.724377 kubelet[2782]: E1216 03:28:13.724187 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.724377 kubelet[2782]: W1216 03:28:13.724203 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.724377 kubelet[2782]: E1216 03:28:13.724215 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.724505 kubelet[2782]: E1216 03:28:13.724435 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.724505 kubelet[2782]: W1216 03:28:13.724444 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.724505 kubelet[2782]: E1216 03:28:13.724455 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:28:13.748194 kubelet[2782]: E1216 03:28:13.748153 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:13.748194 kubelet[2782]: W1216 03:28:13.748178 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:13.748194 kubelet[2782]: E1216 03:28:13.748201 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:13.766458 systemd[1]: Started cri-containerd-61abb08980ebe5e486176a4d4989d22967859019bc8cdee89f3d1f5e743be48c.scope - libcontainer container 61abb08980ebe5e486176a4d4989d22967859019bc8cdee89f3d1f5e743be48c. Dec 16 03:28:13.820000 audit: BPF prog-id=156 op=LOAD Dec 16 03:28:13.821000 audit: BPF prog-id=157 op=LOAD Dec 16 03:28:13.821000 audit[3341]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3304 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:13.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631616262303839383065626535653438363137366134643439383964 Dec 16 03:28:13.821000 audit: BPF prog-id=157 op=UNLOAD Dec 16 03:28:13.821000 audit[3341]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3304 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:13.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631616262303839383065626535653438363137366134643439383964 Dec 16 03:28:13.821000 audit: BPF prog-id=158 op=LOAD Dec 16 03:28:13.821000 audit[3341]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3304 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:13.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631616262303839383065626535653438363137366134643439383964 Dec 16 03:28:13.821000 audit: BPF prog-id=159 op=LOAD Dec 16 03:28:13.821000 audit[3341]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3304 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:13.821000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631616262303839383065626535653438363137366134643439383964 Dec 16 03:28:13.821000 audit: BPF prog-id=159 op=UNLOAD Dec 16 03:28:13.821000 audit[3341]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3304 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:13.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631616262303839383065626535653438363137366134643439383964 Dec 16 03:28:13.821000 audit: BPF prog-id=158 op=UNLOAD Dec 16 03:28:13.821000 audit[3341]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3304 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:13.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631616262303839383065626535653438363137366134643439383964 Dec 16 03:28:13.821000 audit: BPF prog-id=160 op=LOAD Dec 16 03:28:13.821000 audit[3341]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3304 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:13.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631616262303839383065626535653438363137366134643439383964 Dec 16 03:28:13.851529 containerd[1596]: time="2025-12-16T03:28:13.851378801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-78dccc9df6-lhjf5,Uid:7d0afeb1-2a35-439a-acf5-b1f93cf328fa,Namespace:calico-system,Attempt:0,} returns sandbox id \"861bf4d45d322059eaee39e60783f6e9868c004f1414e0fc33549eed27fff211\"" Dec 16 03:28:13.854109 kubelet[2782]: E1216 03:28:13.853356 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:28:13.856724 containerd[1596]: time="2025-12-16T03:28:13.856672588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 03:28:13.864800 containerd[1596]: time="2025-12-16T03:28:13.864734541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4hms4,Uid:7e31ac7b-e1c1-4ae7-8a3c-301c92f50ae2,Namespace:calico-system,Attempt:0,} returns sandbox id \"61abb08980ebe5e486176a4d4989d22967859019bc8cdee89f3d1f5e743be48c\"" Dec 16 03:28:13.871372 kubelet[2782]: E1216 03:28:13.871273 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:28:14.279000 audit[3376]: NETFILTER_CFG table=filter:115 family=2 entries=22 op=nft_register_rule pid=3376 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:28:14.279000 audit[3376]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff52247cd0 a2=0 a3=7fff52247cbc items=0 ppid=2935 pid=3376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:14.279000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:28:14.286000 audit[3376]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3376 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:28:14.286000 audit[3376]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff52247cd0 a2=0 a3=0 items=0 ppid=2935 pid=3376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:14.286000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:28:15.258731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2902329193.mount: Deactivated successfully. Dec 16 03:28:15.402317 kubelet[2782]: E1216 03:28:15.401754 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hrg2x" podUID="8744312e-a06c-4ec6-97fa-99683d819e93" Dec 16 03:28:16.832157 containerd[1596]: time="2025-12-16T03:28:16.831529976Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:28:16.833028 containerd[1596]: time="2025-12-16T03:28:16.832987060Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Dec 16 03:28:16.834177 containerd[1596]: time="2025-12-16T03:28:16.833536582Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:28:16.867181 containerd[1596]: time="2025-12-16T03:28:16.866653531Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:28:16.867590 containerd[1596]: time="2025-12-16T03:28:16.867441332Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.0107153s" Dec 16 03:28:16.867590 containerd[1596]: time="2025-12-16T03:28:16.867479891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 16 03:28:16.869054 
containerd[1596]: time="2025-12-16T03:28:16.868966138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 03:28:16.898963 containerd[1596]: time="2025-12-16T03:28:16.898911959Z" level=info msg="CreateContainer within sandbox \"861bf4d45d322059eaee39e60783f6e9868c004f1414e0fc33549eed27fff211\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 03:28:16.906521 containerd[1596]: time="2025-12-16T03:28:16.906446066Z" level=info msg="Container ad4789f0714029477e3cab9e78b612c4b7dc81c093132d6d9034d53b68f72eb4: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:28:16.914077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1973980344.mount: Deactivated successfully. Dec 16 03:28:16.924611 containerd[1596]: time="2025-12-16T03:28:16.924232217Z" level=info msg="CreateContainer within sandbox \"861bf4d45d322059eaee39e60783f6e9868c004f1414e0fc33549eed27fff211\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ad4789f0714029477e3cab9e78b612c4b7dc81c093132d6d9034d53b68f72eb4\"" Dec 16 03:28:16.926114 containerd[1596]: time="2025-12-16T03:28:16.924958681Z" level=info msg="StartContainer for \"ad4789f0714029477e3cab9e78b612c4b7dc81c093132d6d9034d53b68f72eb4\"" Dec 16 03:28:16.948354 containerd[1596]: time="2025-12-16T03:28:16.948296379Z" level=info msg="connecting to shim ad4789f0714029477e3cab9e78b612c4b7dc81c093132d6d9034d53b68f72eb4" address="unix:///run/containerd/s/536d978b603a4bb806bde45485898b5677dc33a435ec2a269ea5fb1bac73b695" protocol=ttrpc version=3 Dec 16 03:28:16.973419 systemd[1]: Started cri-containerd-ad4789f0714029477e3cab9e78b612c4b7dc81c093132d6d9034d53b68f72eb4.scope - libcontainer container ad4789f0714029477e3cab9e78b612c4b7dc81c093132d6d9034d53b68f72eb4. 
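The "FlexVolume: driver call failed" / "unexpected end of JSON input" entries that recur throughout this boot come from the kubelet probing the FlexVolume plugin directory nodeagent~uds: the expected executable /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds is not present on the node, so the init call produces no stdout and the kubelet cannot unmarshal the empty string as JSON. As a hedged illustration only (the stub below is assumed from the general FlexVolume driver contract, which expects JSON with a "status" field on stdout, and is not taken from this log), a minimal driver that would satisfy the init probe could look like:

#!/usr/bin/env python3
# Hypothetical FlexVolume driver stub (assumption: follows the documented
# FlexVolume contract). The file would live at the path the kubelet probes
# in this journal: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds
# The kubelet parses the driver's stdout as JSON; an empty stdout is what
# produces the "unexpected end of JSON input" entries seen above and below.
import json
import sys

def main() -> int:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # Report success and declare that this driver does not implement attach/detach.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    # Every other call can simply be declared unsupported, still as valid JSON.
    print(json.dumps({"status": "Not supported", "message": "operation %r not implemented" % op}))
    return 1

if __name__ == "__main__":
    sys.exit(main())

Placing an executable like this at the probed path (or removing the nodeagent~uds directory) is what would stop this particular series of entries; the sketch is illustrative only and not specific to this cluster.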
Dec 16 03:28:16.993000 audit: BPF prog-id=161 op=LOAD Dec 16 03:28:16.994000 audit: BPF prog-id=162 op=LOAD Dec 16 03:28:16.994000 audit[3388]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3224 pid=3388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:16.994000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164343738396630373134303239343737653363616239653738623631 Dec 16 03:28:16.994000 audit: BPF prog-id=162 op=UNLOAD Dec 16 03:28:16.994000 audit[3388]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3224 pid=3388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:16.994000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164343738396630373134303239343737653363616239653738623631 Dec 16 03:28:16.994000 audit: BPF prog-id=163 op=LOAD Dec 16 03:28:16.994000 audit[3388]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3224 pid=3388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:16.994000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164343738396630373134303239343737653363616239653738623631 Dec 16 03:28:16.994000 audit: BPF prog-id=164 op=LOAD Dec 16 03:28:16.994000 audit[3388]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3224 pid=3388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:16.994000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164343738396630373134303239343737653363616239653738623631 Dec 16 03:28:16.995000 audit: BPF prog-id=164 op=UNLOAD Dec 16 03:28:16.995000 audit[3388]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3224 pid=3388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:16.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164343738396630373134303239343737653363616239653738623631 Dec 16 03:28:16.995000 audit: BPF prog-id=163 op=UNLOAD Dec 16 03:28:16.995000 audit[3388]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3224 pid=3388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:16.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164343738396630373134303239343737653363616239653738623631 Dec 16 03:28:16.995000 audit: BPF prog-id=165 op=LOAD Dec 16 03:28:16.995000 audit[3388]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3224 pid=3388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:16.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164343738396630373134303239343737653363616239653738623631 Dec 16 03:28:17.050781 containerd[1596]: time="2025-12-16T03:28:17.050741185Z" level=info msg="StartContainer for \"ad4789f0714029477e3cab9e78b612c4b7dc81c093132d6d9034d53b68f72eb4\" returns successfully" Dec 16 03:28:17.400585 kubelet[2782]: E1216 03:28:17.400489 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hrg2x" podUID="8744312e-a06c-4ec6-97fa-99683d819e93" Dec 16 03:28:17.584880 kubelet[2782]: E1216 03:28:17.584829 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:28:17.612625 kubelet[2782]: I1216 03:28:17.610634 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-78dccc9df6-lhjf5" podStartSLOduration=1.597294648 podStartE2EDuration="4.610613833s" podCreationTimestamp="2025-12-16 03:28:13 +0000 UTC" firstStartedPulling="2025-12-16 03:28:13.855402654 +0000 UTC m=+21.722614994" lastFinishedPulling="2025-12-16 03:28:16.868721824 +0000 UTC m=+24.735934179" observedRunningTime="2025-12-16 03:28:17.610448551 +0000 UTC m=+25.477660916" watchObservedRunningTime="2025-12-16 03:28:17.610613833 +0000 UTC m=+25.477826195" Dec 16 03:28:17.619765 kubelet[2782]: E1216 03:28:17.619719 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:17.619765 kubelet[2782]: W1216 03:28:17.619743 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:17.619765 kubelet[2782]: E1216 03:28:17.619767 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:28:17.622242 kubelet[2782]: E1216 03:28:17.619919 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:17.622242 kubelet[2782]: W1216 03:28:17.619925 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:17.622242 kubelet[2782]: E1216 03:28:17.619933 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:17.622242 kubelet[2782]: E1216 03:28:17.620063 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:17.622242 kubelet[2782]: W1216 03:28:17.620070 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:17.622242 kubelet[2782]: E1216 03:28:17.620078 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:17.622242 kubelet[2782]: E1216 03:28:17.620308 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:17.622242 kubelet[2782]: W1216 03:28:17.620315 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:17.622242 kubelet[2782]: E1216 03:28:17.620324 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:17.622242 kubelet[2782]: E1216 03:28:17.620460 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:17.622536 kubelet[2782]: W1216 03:28:17.620466 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:17.622536 kubelet[2782]: E1216 03:28:17.620474 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:17.622536 kubelet[2782]: E1216 03:28:17.620589 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:17.622536 kubelet[2782]: W1216 03:28:17.620595 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:17.622536 kubelet[2782]: E1216 03:28:17.620603 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:28:17.622536 kubelet[2782]: E1216 03:28:17.620718 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:17.622536 kubelet[2782]: W1216 03:28:17.620724 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:17.622536 kubelet[2782]: E1216 03:28:17.620731 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:17.622536 kubelet[2782]: E1216 03:28:17.620847 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:17.622536 kubelet[2782]: W1216 03:28:17.620853 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:17.622875 kubelet[2782]: E1216 03:28:17.620859 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:17.622875 kubelet[2782]: E1216 03:28:17.620990 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:17.622875 kubelet[2782]: W1216 03:28:17.620996 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:17.622875 kubelet[2782]: E1216 03:28:17.621004 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:17.622875 kubelet[2782]: E1216 03:28:17.621153 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:17.622875 kubelet[2782]: W1216 03:28:17.621161 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:17.622875 kubelet[2782]: E1216 03:28:17.621169 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:17.622875 kubelet[2782]: E1216 03:28:17.621287 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:17.622875 kubelet[2782]: W1216 03:28:17.621294 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:17.622875 kubelet[2782]: E1216 03:28:17.621301 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:28:17.623154 kubelet[2782]: E1216 03:28:17.621420 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:17.623154 kubelet[2782]: W1216 03:28:17.621426 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:17.623154 kubelet[2782]: E1216 03:28:17.621432 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:17.623154 kubelet[2782]: E1216 03:28:17.621563 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:17.623154 kubelet[2782]: W1216 03:28:17.621569 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:17.623154 kubelet[2782]: E1216 03:28:17.621576 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:17.623154 kubelet[2782]: E1216 03:28:17.621720 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:17.623154 kubelet[2782]: W1216 03:28:17.621731 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:17.623154 kubelet[2782]: E1216 03:28:17.621742 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:17.623154 kubelet[2782]: E1216 03:28:17.621904 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:17.623534 kubelet[2782]: W1216 03:28:17.621911 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:17.623534 kubelet[2782]: E1216 03:28:17.621920 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:17.646450 kubelet[2782]: E1216 03:28:17.646283 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:17.646450 kubelet[2782]: W1216 03:28:17.646306 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:17.646450 kubelet[2782]: E1216 03:28:17.646328 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:28:17.647082 kubelet[2782]: E1216 03:28:17.647014 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:17.647082 kubelet[2782]: W1216 03:28:17.647026 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:17.647082 kubelet[2782]: E1216 03:28:17.647038 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:17.648426 kubelet[2782]: E1216 03:28:17.648391 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:17.648426 kubelet[2782]: W1216 03:28:17.648412 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:17.648426 kubelet[2782]: E1216 03:28:17.648430 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:17.648773 kubelet[2782]: E1216 03:28:17.648650 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:17.648773 kubelet[2782]: W1216 03:28:17.648659 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:17.648773 kubelet[2782]: E1216 03:28:17.648670 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:17.648896 kubelet[2782]: E1216 03:28:17.648848 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:17.648896 kubelet[2782]: W1216 03:28:17.648855 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:17.648896 kubelet[2782]: E1216 03:28:17.648864 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:17.649074 kubelet[2782]: E1216 03:28:17.649061 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:17.649074 kubelet[2782]: W1216 03:28:17.649071 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:17.649255 kubelet[2782]: E1216 03:28:17.649080 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:28:17.649755 kubelet[2782]: E1216 03:28:17.649464 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:17.649755 kubelet[2782]: W1216 03:28:17.649485 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:17.649755 kubelet[2782]: E1216 03:28:17.649500 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:17.652354 kubelet[2782]: E1216 03:28:17.651240 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:17.652729 kubelet[2782]: W1216 03:28:17.652584 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:17.652729 kubelet[2782]: E1216 03:28:17.652632 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:17.654046 kubelet[2782]: E1216 03:28:17.653509 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:17.654046 kubelet[2782]: W1216 03:28:17.653624 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:17.654046 kubelet[2782]: E1216 03:28:17.653653 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:17.656532 kubelet[2782]: E1216 03:28:17.656444 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:17.656921 kubelet[2782]: W1216 03:28:17.656630 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:17.657149 kubelet[2782]: E1216 03:28:17.656663 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:17.658212 kubelet[2782]: E1216 03:28:17.658136 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:17.658212 kubelet[2782]: W1216 03:28:17.658161 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:17.658212 kubelet[2782]: E1216 03:28:17.658190 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:28:17.659626 kubelet[2782]: E1216 03:28:17.659425 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:17.659991 kubelet[2782]: W1216 03:28:17.659752 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:17.659991 kubelet[2782]: E1216 03:28:17.659784 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:17.663123 kubelet[2782]: E1216 03:28:17.662291 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:17.663776 kubelet[2782]: W1216 03:28:17.663290 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:17.663776 kubelet[2782]: E1216 03:28:17.663348 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:17.664660 kubelet[2782]: E1216 03:28:17.664220 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:17.664924 kubelet[2782]: W1216 03:28:17.664246 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:17.664924 kubelet[2782]: E1216 03:28:17.664817 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:17.666571 kubelet[2782]: E1216 03:28:17.666066 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:17.666571 kubelet[2782]: W1216 03:28:17.666455 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:17.666571 kubelet[2782]: E1216 03:28:17.666484 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:17.668113 kubelet[2782]: E1216 03:28:17.668019 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:17.668516 kubelet[2782]: W1216 03:28:17.668323 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:17.668516 kubelet[2782]: E1216 03:28:17.668363 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:28:17.669539 kubelet[2782]: E1216 03:28:17.669451 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:17.670372 kubelet[2782]: W1216 03:28:17.669766 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:17.670372 kubelet[2782]: E1216 03:28:17.669797 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:17.672930 kubelet[2782]: E1216 03:28:17.672899 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:17.673242 kubelet[2782]: W1216 03:28:17.673146 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:17.673242 kubelet[2782]: E1216 03:28:17.673182 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:17.680000 audit[3462]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=3462 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:28:17.682459 kernel: kauditd_printk_skb: 74 callbacks suppressed Dec 16 03:28:17.682820 kernel: audit: type=1325 audit(1765855697.680:559): table=filter:117 family=2 entries=21 op=nft_register_rule pid=3462 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:28:17.680000 audit[3462]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe6bf92d80 a2=0 a3=7ffe6bf92d6c items=0 ppid=2935 pid=3462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:17.694189 kernel: audit: type=1300 audit(1765855697.680:559): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe6bf92d80 a2=0 a3=7ffe6bf92d6c items=0 ppid=2935 pid=3462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:17.680000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:28:17.693000 audit[3462]: NETFILTER_CFG table=nat:118 family=2 entries=19 op=nft_register_chain pid=3462 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:28:17.699210 kernel: audit: type=1327 audit(1765855697.680:559): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:28:17.699384 kernel: audit: type=1325 audit(1765855697.693:560): table=nat:118 family=2 entries=19 op=nft_register_chain pid=3462 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:28:17.693000 audit[3462]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe6bf92d80 a2=0 a3=7ffe6bf92d6c items=0 ppid=2935 pid=3462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:17.702791 kernel: audit: type=1300 audit(1765855697.693:560): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe6bf92d80 a2=0 a3=7ffe6bf92d6c items=0 ppid=2935 pid=3462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:17.693000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:28:17.709105 kernel: audit: type=1327 audit(1765855697.693:560): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:28:18.590962 kubelet[2782]: E1216 03:28:18.589322 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:28:18.629825 kubelet[2782]: E1216 03:28:18.629432 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:18.629825 kubelet[2782]: W1216 03:28:18.629759 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:18.629825 kubelet[2782]: E1216 03:28:18.629790 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:18.630784 kubelet[2782]: E1216 03:28:18.630415 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:18.630784 kubelet[2782]: W1216 03:28:18.630432 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:18.630784 kubelet[2782]: E1216 03:28:18.630709 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:18.631329 kubelet[2782]: E1216 03:28:18.631256 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:18.631530 kubelet[2782]: W1216 03:28:18.631410 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:18.631530 kubelet[2782]: E1216 03:28:18.631437 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:28:18.632107 kubelet[2782]: E1216 03:28:18.632046 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:18.632107 kubelet[2782]: W1216 03:28:18.632061 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:18.632678 kubelet[2782]: E1216 03:28:18.632075 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:18.633573 kubelet[2782]: E1216 03:28:18.633194 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:18.633573 kubelet[2782]: W1216 03:28:18.633397 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:18.633573 kubelet[2782]: E1216 03:28:18.633416 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:18.633911 kubelet[2782]: E1216 03:28:18.633785 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:18.633911 kubelet[2782]: W1216 03:28:18.633797 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:18.633911 kubelet[2782]: E1216 03:28:18.633811 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:18.634268 kubelet[2782]: E1216 03:28:18.634200 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:18.634268 kubelet[2782]: W1216 03:28:18.634213 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:18.634268 kubelet[2782]: E1216 03:28:18.634226 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:18.634973 kubelet[2782]: E1216 03:28:18.634946 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:18.635159 kubelet[2782]: W1216 03:28:18.635059 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:18.635159 kubelet[2782]: E1216 03:28:18.635077 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:28:18.635628 containerd[1596]: time="2025-12-16T03:28:18.635532189Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:28:18.636808 containerd[1596]: time="2025-12-16T03:28:18.636401019Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 16 03:28:18.637081 kubelet[2782]: E1216 03:28:18.636930 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:18.637081 kubelet[2782]: W1216 03:28:18.636945 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:18.637081 kubelet[2782]: E1216 03:28:18.636964 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:18.637368 kubelet[2782]: E1216 03:28:18.637226 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:18.637368 kubelet[2782]: W1216 03:28:18.637236 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:18.637368 kubelet[2782]: E1216 03:28:18.637248 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:18.637850 kubelet[2782]: E1216 03:28:18.637750 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:18.637850 kubelet[2782]: W1216 03:28:18.637767 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:18.637850 kubelet[2782]: E1216 03:28:18.637780 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:18.638315 containerd[1596]: time="2025-12-16T03:28:18.638277135Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:28:18.638955 kubelet[2782]: E1216 03:28:18.638867 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:18.638955 kubelet[2782]: W1216 03:28:18.638882 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:18.638955 kubelet[2782]: E1216 03:28:18.638896 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:28:18.640911 containerd[1596]: time="2025-12-16T03:28:18.640630404Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:28:18.641028 kubelet[2782]: E1216 03:28:18.640740 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:18.641028 kubelet[2782]: W1216 03:28:18.640763 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:18.641028 kubelet[2782]: E1216 03:28:18.640780 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:18.641383 kubelet[2782]: E1216 03:28:18.641242 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:18.641383 kubelet[2782]: W1216 03:28:18.641254 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:18.641383 kubelet[2782]: E1216 03:28:18.641268 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:18.641674 kubelet[2782]: E1216 03:28:18.641564 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:18.641674 kubelet[2782]: W1216 03:28:18.641575 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:18.641674 kubelet[2782]: E1216 03:28:18.641586 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:28:18.642522 containerd[1596]: time="2025-12-16T03:28:18.642174769Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.773177739s" Dec 16 03:28:18.642522 containerd[1596]: time="2025-12-16T03:28:18.642241579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 16 03:28:18.662765 containerd[1596]: time="2025-12-16T03:28:18.662685841Z" level=info msg="CreateContainer within sandbox \"61abb08980ebe5e486176a4d4989d22967859019bc8cdee89f3d1f5e743be48c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 03:28:18.672511 kubelet[2782]: E1216 03:28:18.672457 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:18.672706 kubelet[2782]: W1216 03:28:18.672621 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:18.672706 kubelet[2782]: E1216 03:28:18.672657 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:18.674169 kubelet[2782]: E1216 03:28:18.673063 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:18.676114 kubelet[2782]: W1216 03:28:18.674530 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:18.676114 kubelet[2782]: E1216 03:28:18.674578 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:18.676295 containerd[1596]: time="2025-12-16T03:28:18.674736151Z" level=info msg="Container d7ea02a22daa9da47f88b465b68507e1fdc1eb9cca1e2d6319d6658f525ae0a0: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:28:18.678836 kubelet[2782]: E1216 03:28:18.678798 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:18.678836 kubelet[2782]: W1216 03:28:18.678831 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:18.679191 kubelet[2782]: E1216 03:28:18.678854 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:18.680191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1214871313.mount: Deactivated successfully. 
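The burst of driver-call.go and plugins.go errors above is kubelet's FlexVolume prober invoking /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the init argument: the binary is not present, the call therefore produces empty output, and unmarshalling "" as JSON fails with "unexpected end of JSON input" on every probe pass. For illustration only, a minimal hedged sketch (Go, not taken from this system) of the JSON status object a FlexVolume driver's init call is expected to print; the capability set shown is an assumption, not something recorded in this log.

```go
// Hypothetical FlexVolume driver sketch (not part of this log): kubelet runs
// "<driver> init" and JSON-decodes stdout, so an empty stdout yields the
// "unexpected end of JSON input" errors recorded above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON object FlexVolume drivers print on stdout.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false}, // assumed capability set
		})
		fmt.Println(string(out))
		return
	}
	// mount/unmount and the other FlexVolume verbs would be handled here.
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
}
```

Because the uds executable never appears, each plugin re-probe logs the same three-line failure, which is why the message repeats throughout this section.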
Dec 16 03:28:18.681701 kubelet[2782]: E1216 03:28:18.681414 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:18.681701 kubelet[2782]: W1216 03:28:18.681451 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:18.681701 kubelet[2782]: E1216 03:28:18.681479 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:18.683484 kubelet[2782]: E1216 03:28:18.683226 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:18.683484 kubelet[2782]: W1216 03:28:18.683251 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:18.683484 kubelet[2782]: E1216 03:28:18.683275 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:18.683996 kubelet[2782]: E1216 03:28:18.683742 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:18.683996 kubelet[2782]: W1216 03:28:18.683768 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:18.683996 kubelet[2782]: E1216 03:28:18.683784 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:18.684342 kubelet[2782]: E1216 03:28:18.684320 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:18.684486 kubelet[2782]: W1216 03:28:18.684424 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:18.684486 kubelet[2782]: E1216 03:28:18.684440 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:18.684979 kubelet[2782]: E1216 03:28:18.684957 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:18.685121 kubelet[2782]: W1216 03:28:18.685052 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:18.685121 kubelet[2782]: E1216 03:28:18.685071 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:28:18.685522 kubelet[2782]: E1216 03:28:18.685486 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:18.685522 kubelet[2782]: W1216 03:28:18.685497 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:18.685522 kubelet[2782]: E1216 03:28:18.685509 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:18.685997 kubelet[2782]: E1216 03:28:18.685929 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:18.685997 kubelet[2782]: W1216 03:28:18.685941 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:18.685997 kubelet[2782]: E1216 03:28:18.685951 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:18.686312 kubelet[2782]: E1216 03:28:18.686299 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:18.686511 kubelet[2782]: W1216 03:28:18.686361 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:18.686511 kubelet[2782]: E1216 03:28:18.686375 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:18.687152 kubelet[2782]: E1216 03:28:18.687137 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:18.687293 kubelet[2782]: W1216 03:28:18.687228 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:18.687293 kubelet[2782]: E1216 03:28:18.687245 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:18.687757 kubelet[2782]: E1216 03:28:18.687720 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:18.687757 kubelet[2782]: W1216 03:28:18.687733 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:18.687757 kubelet[2782]: E1216 03:28:18.687745 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:28:18.688129 kubelet[2782]: E1216 03:28:18.688067 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:18.688129 kubelet[2782]: W1216 03:28:18.688077 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:18.688349 kubelet[2782]: E1216 03:28:18.688223 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:18.688647 kubelet[2782]: E1216 03:28:18.688635 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:18.688717 kubelet[2782]: W1216 03:28:18.688708 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:18.688800 kubelet[2782]: E1216 03:28:18.688755 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:18.689102 kubelet[2782]: E1216 03:28:18.689017 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:18.689102 kubelet[2782]: W1216 03:28:18.689028 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:18.689102 kubelet[2782]: E1216 03:28:18.689038 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:18.689825 kubelet[2782]: E1216 03:28:18.689530 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:18.689825 kubelet[2782]: W1216 03:28:18.689546 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:18.689825 kubelet[2782]: E1216 03:28:18.689558 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:28:18.690193 kubelet[2782]: E1216 03:28:18.690178 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:28:18.690346 kubelet[2782]: W1216 03:28:18.690326 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:28:18.690455 kubelet[2782]: E1216 03:28:18.690404 2782 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:28:18.696022 containerd[1596]: time="2025-12-16T03:28:18.695954830Z" level=info msg="CreateContainer within sandbox \"61abb08980ebe5e486176a4d4989d22967859019bc8cdee89f3d1f5e743be48c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d7ea02a22daa9da47f88b465b68507e1fdc1eb9cca1e2d6319d6658f525ae0a0\"" Dec 16 03:28:18.698125 containerd[1596]: time="2025-12-16T03:28:18.697914867Z" level=info msg="StartContainer for \"d7ea02a22daa9da47f88b465b68507e1fdc1eb9cca1e2d6319d6658f525ae0a0\"" Dec 16 03:28:18.707869 containerd[1596]: time="2025-12-16T03:28:18.707778317Z" level=info msg="connecting to shim d7ea02a22daa9da47f88b465b68507e1fdc1eb9cca1e2d6319d6658f525ae0a0" address="unix:///run/containerd/s/d8356615bef8c78ceb72ba915aba9f6f685e1d356682a76c1e63dc0f2371c5e3" protocol=ttrpc version=3 Dec 16 03:28:18.740837 systemd[1]: Started cri-containerd-d7ea02a22daa9da47f88b465b68507e1fdc1eb9cca1e2d6319d6658f525ae0a0.scope - libcontainer container d7ea02a22daa9da47f88b465b68507e1fdc1eb9cca1e2d6319d6658f525ae0a0. Dec 16 03:28:18.818000 audit: BPF prog-id=166 op=LOAD Dec 16 03:28:18.818000 audit[3500]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3304 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:18.824547 kernel: audit: type=1334 audit(1765855698.818:561): prog-id=166 op=LOAD Dec 16 03:28:18.824697 kernel: audit: type=1300 audit(1765855698.818:561): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3304 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:18.818000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437656130326132326461613964613437663838623436356236383530 Dec 16 03:28:18.820000 audit: BPF prog-id=167 op=LOAD Dec 16 03:28:18.833402 kernel: audit: type=1327 audit(1765855698.818:561): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437656130326132326461613964613437663838623436356236383530 Dec 16 03:28:18.833613 kernel: audit: type=1334 audit(1765855698.820:562): prog-id=167 op=LOAD Dec 16 03:28:18.820000 audit[3500]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3304 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:18.820000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437656130326132326461613964613437663838623436356236383530 Dec 16 03:28:18.820000 audit: BPF prog-id=167 op=UNLOAD Dec 16 03:28:18.820000 audit[3500]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3304 pid=3500 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:18.820000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437656130326132326461613964613437663838623436356236383530 Dec 16 03:28:18.820000 audit: BPF prog-id=166 op=UNLOAD Dec 16 03:28:18.820000 audit[3500]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3304 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:18.820000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437656130326132326461613964613437663838623436356236383530 Dec 16 03:28:18.820000 audit: BPF prog-id=168 op=LOAD Dec 16 03:28:18.820000 audit[3500]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3304 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:18.820000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437656130326132326461613964613437663838623436356236383530 Dec 16 03:28:18.863783 containerd[1596]: time="2025-12-16T03:28:18.863653382Z" level=info msg="StartContainer for \"d7ea02a22daa9da47f88b465b68507e1fdc1eb9cca1e2d6319d6658f525ae0a0\" returns successfully" Dec 16 03:28:18.885764 systemd[1]: cri-containerd-d7ea02a22daa9da47f88b465b68507e1fdc1eb9cca1e2d6319d6658f525ae0a0.scope: Deactivated successfully. Dec 16 03:28:18.888000 audit: BPF prog-id=168 op=UNLOAD Dec 16 03:28:18.919893 containerd[1596]: time="2025-12-16T03:28:18.919700529Z" level=info msg="received container exit event container_id:\"d7ea02a22daa9da47f88b465b68507e1fdc1eb9cca1e2d6319d6658f525ae0a0\" id:\"d7ea02a22daa9da47f88b465b68507e1fdc1eb9cca1e2d6319d6658f525ae0a0\" pid:3513 exited_at:{seconds:1765855698 nanos:888223952}" Dec 16 03:28:18.953059 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7ea02a22daa9da47f88b465b68507e1fdc1eb9cca1e2d6319d6658f525ae0a0-rootfs.mount: Deactivated successfully. 
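The audit PROCTITLE fields interleaved with the containerd messages are hex-encoded command lines with NUL-separated arguments; decoded, they show the runc invocations the containerd shim made for these containers. A small decoding sketch, standard library only; the constant below is a shortened prefix of one PROCTITLE value copied from this log.

```go
// Decode an audit PROCTITLE value: hex-encoded argv joined by NUL bytes.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// Shortened prefix of a PROCTITLE value from the log above.
	proctitle := "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	// Arguments are separated by NUL bytes; join with spaces for display.
	args := strings.Split(string(raw), "\x00")
	fmt.Println(strings.Join(args, " ")) // runc --root /run/containerd/runc/k8s.io
}
```

The surrounding BPF prog-id LOAD/UNLOAD pairs (syscall 321 is bpf(2) on x86_64) are most likely runc installing and releasing the cgroup v2 device-filter programs while each container starts and stops.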
Dec 16 03:28:19.401210 kubelet[2782]: E1216 03:28:19.401065 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hrg2x" podUID="8744312e-a06c-4ec6-97fa-99683d819e93" Dec 16 03:28:19.594266 kubelet[2782]: E1216 03:28:19.593756 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:28:19.594266 kubelet[2782]: E1216 03:28:19.593971 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:28:19.598107 containerd[1596]: time="2025-12-16T03:28:19.597466136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 03:28:21.400286 kubelet[2782]: E1216 03:28:21.400225 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hrg2x" podUID="8744312e-a06c-4ec6-97fa-99683d819e93" Dec 16 03:28:23.401356 kubelet[2782]: E1216 03:28:23.401297 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hrg2x" podUID="8744312e-a06c-4ec6-97fa-99683d819e93" Dec 16 03:28:23.614741 containerd[1596]: time="2025-12-16T03:28:23.614339770Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:28:23.615277 containerd[1596]: time="2025-12-16T03:28:23.614898012Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Dec 16 03:28:23.616151 containerd[1596]: time="2025-12-16T03:28:23.616120508Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:28:23.619362 containerd[1596]: time="2025-12-16T03:28:23.619233317Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:28:23.620828 containerd[1596]: time="2025-12-16T03:28:23.620512066Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.022988801s" Dec 16 03:28:23.620967 containerd[1596]: time="2025-12-16T03:28:23.620823291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 16 03:28:23.626292 containerd[1596]: time="2025-12-16T03:28:23.626038918Z" level=info msg="CreateContainer within sandbox 
\"61abb08980ebe5e486176a4d4989d22967859019bc8cdee89f3d1f5e743be48c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 03:28:23.639883 containerd[1596]: time="2025-12-16T03:28:23.639439189Z" level=info msg="Container 6417a404c91c50f1ab25b680e1a198ced2faf372c9ee79b621e52a2ab054cf5c: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:28:23.644726 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1262122846.mount: Deactivated successfully. Dec 16 03:28:23.658987 containerd[1596]: time="2025-12-16T03:28:23.658796435Z" level=info msg="CreateContainer within sandbox \"61abb08980ebe5e486176a4d4989d22967859019bc8cdee89f3d1f5e743be48c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6417a404c91c50f1ab25b680e1a198ced2faf372c9ee79b621e52a2ab054cf5c\"" Dec 16 03:28:23.660381 containerd[1596]: time="2025-12-16T03:28:23.660336101Z" level=info msg="StartContainer for \"6417a404c91c50f1ab25b680e1a198ced2faf372c9ee79b621e52a2ab054cf5c\"" Dec 16 03:28:23.675074 containerd[1596]: time="2025-12-16T03:28:23.674946316Z" level=info msg="connecting to shim 6417a404c91c50f1ab25b680e1a198ced2faf372c9ee79b621e52a2ab054cf5c" address="unix:///run/containerd/s/d8356615bef8c78ceb72ba915aba9f6f685e1d356682a76c1e63dc0f2371c5e3" protocol=ttrpc version=3 Dec 16 03:28:23.719507 systemd[1]: Started cri-containerd-6417a404c91c50f1ab25b680e1a198ced2faf372c9ee79b621e52a2ab054cf5c.scope - libcontainer container 6417a404c91c50f1ab25b680e1a198ced2faf372c9ee79b621e52a2ab054cf5c. Dec 16 03:28:23.806029 kernel: kauditd_printk_skb: 12 callbacks suppressed Dec 16 03:28:23.806275 kernel: audit: type=1334 audit(1765855703.800:567): prog-id=169 op=LOAD Dec 16 03:28:23.800000 audit: BPF prog-id=169 op=LOAD Dec 16 03:28:23.800000 audit[3561]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3304 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:23.800000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634313761343034633931633530663161623235623638306531613139 Dec 16 03:28:23.814985 kernel: audit: type=1300 audit(1765855703.800:567): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3304 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:23.815126 kernel: audit: type=1327 audit(1765855703.800:567): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634313761343034633931633530663161623235623638306531613139 Dec 16 03:28:23.805000 audit: BPF prog-id=170 op=LOAD Dec 16 03:28:23.818807 kernel: audit: type=1334 audit(1765855703.805:568): prog-id=170 op=LOAD Dec 16 03:28:23.805000 audit[3561]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3304 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:28:23.826261 kernel: audit: type=1300 audit(1765855703.805:568): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3304 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:23.826457 kernel: audit: type=1327 audit(1765855703.805:568): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634313761343034633931633530663161623235623638306531613139 Dec 16 03:28:23.805000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634313761343034633931633530663161623235623638306531613139 Dec 16 03:28:23.805000 audit: BPF prog-id=170 op=UNLOAD Dec 16 03:28:23.832678 kernel: audit: type=1334 audit(1765855703.805:569): prog-id=170 op=UNLOAD Dec 16 03:28:23.805000 audit[3561]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3304 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:23.835845 kernel: audit: type=1300 audit(1765855703.805:569): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3304 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:23.805000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634313761343034633931633530663161623235623638306531613139 Dec 16 03:28:23.842983 kernel: audit: type=1327 audit(1765855703.805:569): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634313761343034633931633530663161623235623638306531613139 Dec 16 03:28:23.805000 audit: BPF prog-id=169 op=UNLOAD Dec 16 03:28:23.848147 kernel: audit: type=1334 audit(1765855703.805:570): prog-id=169 op=UNLOAD Dec 16 03:28:23.805000 audit[3561]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3304 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:23.805000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634313761343034633931633530663161623235623638306531613139 Dec 16 03:28:23.805000 audit: BPF prog-id=171 op=LOAD Dec 16 03:28:23.805000 audit[3561]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3304 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:23.805000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634313761343034633931633530663161623235623638306531613139 Dec 16 03:28:23.892777 containerd[1596]: time="2025-12-16T03:28:23.892297115Z" level=info msg="StartContainer for \"6417a404c91c50f1ab25b680e1a198ced2faf372c9ee79b621e52a2ab054cf5c\" returns successfully" Dec 16 03:28:24.621068 systemd[1]: cri-containerd-6417a404c91c50f1ab25b680e1a198ced2faf372c9ee79b621e52a2ab054cf5c.scope: Deactivated successfully. Dec 16 03:28:24.622485 systemd[1]: cri-containerd-6417a404c91c50f1ab25b680e1a198ced2faf372c9ee79b621e52a2ab054cf5c.scope: Consumed 761ms CPU time, 170.7M memory peak, 9.2M read from disk, 171.3M written to disk. Dec 16 03:28:24.626000 audit: BPF prog-id=171 op=UNLOAD Dec 16 03:28:24.633591 kubelet[2782]: E1216 03:28:24.633522 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:28:24.659125 containerd[1596]: time="2025-12-16T03:28:24.658390326Z" level=info msg="received container exit event container_id:\"6417a404c91c50f1ab25b680e1a198ced2faf372c9ee79b621e52a2ab054cf5c\" id:\"6417a404c91c50f1ab25b680e1a198ced2faf372c9ee79b621e52a2ab054cf5c\" pid:3575 exited_at:{seconds:1765855704 nanos:657686834}" Dec 16 03:28:24.721340 kubelet[2782]: I1216 03:28:24.719515 2782 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Dec 16 03:28:24.733618 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6417a404c91c50f1ab25b680e1a198ced2faf372c9ee79b621e52a2ab054cf5c-rootfs.mount: Deactivated successfully. Dec 16 03:28:24.809575 systemd[1]: Created slice kubepods-burstable-pod4bd5b5ce_0952_4c36_9327_5163395763f4.slice - libcontainer container kubepods-burstable-pod4bd5b5ce_0952_4c36_9327_5163395763f4.slice. 
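The "Consumed 761ms CPU time, 170.7M memory peak, 9.2M read from disk, 171.3M written to disk" line above is systemd's per-unit resource accounting for the install-cni scope. On a cgroup v2 host those counters are exposed as files in the scope's cgroup directory (cpu.stat, memory.peak, io.stat); a rough sketch that dumps them for a scope directory passed on the command line (the exact cgroup path is an assumption, not taken from this journal):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: scopestat <cgroup directory of a cri-containerd-*.scope unit>")
		os.Exit(1)
	}
	scope := os.Args[1]
	// cpu.stat carries usage_usec, memory.peak the high-water mark in bytes,
	// io.stat the per-device rbytes/wbytes that systemd sums for its report.
	for _, name := range []string{"cpu.stat", "memory.peak", "io.stat"} {
		data, err := os.ReadFile(filepath.Join(scope, name))
		if err != nil {
			fmt.Println(name, "not readable:", err)
			continue
		}
		fmt.Printf("%s:\n%s", name, data)
	}
}
```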
Dec 16 03:28:24.831121 kubelet[2782]: I1216 03:28:24.830340 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bd5b5ce-0952-4c36-9327-5163395763f4-config-volume\") pod \"coredns-66bc5c9577-hphzw\" (UID: \"4bd5b5ce-0952-4c36-9327-5163395763f4\") " pod="kube-system/coredns-66bc5c9577-hphzw" Dec 16 03:28:24.831512 kubelet[2782]: I1216 03:28:24.831481 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgbpb\" (UniqueName: \"kubernetes.io/projected/4bd5b5ce-0952-4c36-9327-5163395763f4-kube-api-access-tgbpb\") pod \"coredns-66bc5c9577-hphzw\" (UID: \"4bd5b5ce-0952-4c36-9327-5163395763f4\") " pod="kube-system/coredns-66bc5c9577-hphzw" Dec 16 03:28:24.831839 kubelet[2782]: I1216 03:28:24.831524 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cb31b945-d473-43bf-aec2-f1956131d323-whisker-backend-key-pair\") pod \"whisker-5f76595f4d-8k6bg\" (UID: \"cb31b945-d473-43bf-aec2-f1956131d323\") " pod="calico-system/whisker-5f76595f4d-8k6bg" Dec 16 03:28:24.831839 kubelet[2782]: I1216 03:28:24.831548 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vgzh\" (UniqueName: \"kubernetes.io/projected/cb31b945-d473-43bf-aec2-f1956131d323-kube-api-access-6vgzh\") pod \"whisker-5f76595f4d-8k6bg\" (UID: \"cb31b945-d473-43bf-aec2-f1956131d323\") " pod="calico-system/whisker-5f76595f4d-8k6bg" Dec 16 03:28:24.831839 kubelet[2782]: I1216 03:28:24.831579 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb31b945-d473-43bf-aec2-f1956131d323-whisker-ca-bundle\") pod \"whisker-5f76595f4d-8k6bg\" (UID: \"cb31b945-d473-43bf-aec2-f1956131d323\") " pod="calico-system/whisker-5f76595f4d-8k6bg" Dec 16 03:28:24.833545 systemd[1]: Created slice kubepods-besteffort-podcb31b945_d473_43bf_aec2_f1956131d323.slice - libcontainer container kubepods-besteffort-podcb31b945_d473_43bf_aec2_f1956131d323.slice. Dec 16 03:28:24.843259 systemd[1]: Created slice kubepods-burstable-pod4c0d8b2a_7296_491d_a267_2b8116b5d172.slice - libcontainer container kubepods-burstable-pod4c0d8b2a_7296_491d_a267_2b8116b5d172.slice. Dec 16 03:28:24.859226 systemd[1]: Created slice kubepods-besteffort-pod34973ba8_5ee2_4cc3_a8d0_65270e641be0.slice - libcontainer container kubepods-besteffort-pod34973ba8_5ee2_4cc3_a8d0_65270e641be0.slice. Dec 16 03:28:24.872280 systemd[1]: Created slice kubepods-besteffort-podcff3841d_c916_42d7_ba21_c96ff077d2f0.slice - libcontainer container kubepods-besteffort-podcff3841d_c916_42d7_ba21_c96ff077d2f0.slice. Dec 16 03:28:24.887591 systemd[1]: Created slice kubepods-besteffort-pod49d300b0_388a_4cb6_b194_17855a6b768b.slice - libcontainer container kubepods-besteffort-pod49d300b0_388a_4cb6_b194_17855a6b768b.slice. Dec 16 03:28:24.901871 systemd[1]: Created slice kubepods-besteffort-podfb012d08_2ffd_46d4_bf1b_e4743471acfd.slice - libcontainer container kubepods-besteffort-podfb012d08_2ffd_46d4_bf1b_e4743471acfd.slice. 
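The "Created slice kubepods-..." entries above and below follow the kubelet systemd cgroup driver's naming: the pod QoS class plus the pod UID, with dashes in the UID escaped to underscores because "-" marks slice nesting in systemd unit names. A rough reconstruction (not kubelet source) that reproduces the names seen in this journal:

```go
package main

import (
	"fmt"
	"strings"
)

// podSlice mimics the unit names in the journal: QoS class plus pod UID,
// with "-" in the UID escaped to "_" so it does not read as slice nesting.
// Illustrative only; the kubelet covers more cases (e.g. Guaranteed pods).
func podSlice(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(podSlice("burstable", "4bd5b5ce-0952-4c36-9327-5163395763f4"))
	fmt.Println(podSlice("besteffort", "cb31b945-d473-43bf-aec2-f1956131d323"))
	// kubepods-burstable-pod4bd5b5ce_0952_4c36_9327_5163395763f4.slice
	// kubepods-besteffort-podcb31b945_d473_43bf_aec2_f1956131d323.slice
}
```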
Dec 16 03:28:24.933458 kubelet[2782]: I1216 03:28:24.932398 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c0d8b2a-7296-491d-a267-2b8116b5d172-config-volume\") pod \"coredns-66bc5c9577-glpm2\" (UID: \"4c0d8b2a-7296-491d-a267-2b8116b5d172\") " pod="kube-system/coredns-66bc5c9577-glpm2" Dec 16 03:28:24.933458 kubelet[2782]: I1216 03:28:24.932460 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/49d300b0-388a-4cb6-b194-17855a6b768b-calico-apiserver-certs\") pod \"calico-apiserver-699cff98b-rrdrk\" (UID: \"49d300b0-388a-4cb6-b194-17855a6b768b\") " pod="calico-apiserver/calico-apiserver-699cff98b-rrdrk" Dec 16 03:28:24.933681 kubelet[2782]: I1216 03:28:24.933567 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34973ba8-5ee2-4cc3-a8d0-65270e641be0-tigera-ca-bundle\") pod \"calico-kube-controllers-d6f5f89f8-mz87q\" (UID: \"34973ba8-5ee2-4cc3-a8d0-65270e641be0\") " pod="calico-system/calico-kube-controllers-d6f5f89f8-mz87q" Dec 16 03:28:24.933681 kubelet[2782]: I1216 03:28:24.933663 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6mz7\" (UniqueName: \"kubernetes.io/projected/fb012d08-2ffd-46d4-bf1b-e4743471acfd-kube-api-access-h6mz7\") pod \"calico-apiserver-699cff98b-z4b6s\" (UID: \"fb012d08-2ffd-46d4-bf1b-e4743471acfd\") " pod="calico-apiserver/calico-apiserver-699cff98b-z4b6s" Dec 16 03:28:24.933744 kubelet[2782]: I1216 03:28:24.933691 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cff3841d-c916-42d7-ba21-c96ff077d2f0-config\") pod \"goldmane-7c778bb748-mx7nf\" (UID: \"cff3841d-c916-42d7-ba21-c96ff077d2f0\") " pod="calico-system/goldmane-7c778bb748-mx7nf" Dec 16 03:28:24.933744 kubelet[2782]: I1216 03:28:24.933716 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cff3841d-c916-42d7-ba21-c96ff077d2f0-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-mx7nf\" (UID: \"cff3841d-c916-42d7-ba21-c96ff077d2f0\") " pod="calico-system/goldmane-7c778bb748-mx7nf" Dec 16 03:28:24.933800 kubelet[2782]: I1216 03:28:24.933739 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/cff3841d-c916-42d7-ba21-c96ff077d2f0-goldmane-key-pair\") pod \"goldmane-7c778bb748-mx7nf\" (UID: \"cff3841d-c916-42d7-ba21-c96ff077d2f0\") " pod="calico-system/goldmane-7c778bb748-mx7nf" Dec 16 03:28:24.933800 kubelet[2782]: I1216 03:28:24.933776 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fb012d08-2ffd-46d4-bf1b-e4743471acfd-calico-apiserver-certs\") pod \"calico-apiserver-699cff98b-z4b6s\" (UID: \"fb012d08-2ffd-46d4-bf1b-e4743471acfd\") " pod="calico-apiserver/calico-apiserver-699cff98b-z4b6s" Dec 16 03:28:24.933888 kubelet[2782]: I1216 03:28:24.933803 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9454\" (UniqueName: 
\"kubernetes.io/projected/4c0d8b2a-7296-491d-a267-2b8116b5d172-kube-api-access-c9454\") pod \"coredns-66bc5c9577-glpm2\" (UID: \"4c0d8b2a-7296-491d-a267-2b8116b5d172\") " pod="kube-system/coredns-66bc5c9577-glpm2" Dec 16 03:28:24.933888 kubelet[2782]: I1216 03:28:24.933830 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5zng\" (UniqueName: \"kubernetes.io/projected/49d300b0-388a-4cb6-b194-17855a6b768b-kube-api-access-x5zng\") pod \"calico-apiserver-699cff98b-rrdrk\" (UID: \"49d300b0-388a-4cb6-b194-17855a6b768b\") " pod="calico-apiserver/calico-apiserver-699cff98b-rrdrk" Dec 16 03:28:24.933888 kubelet[2782]: I1216 03:28:24.933881 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkrlw\" (UniqueName: \"kubernetes.io/projected/34973ba8-5ee2-4cc3-a8d0-65270e641be0-kube-api-access-mkrlw\") pod \"calico-kube-controllers-d6f5f89f8-mz87q\" (UID: \"34973ba8-5ee2-4cc3-a8d0-65270e641be0\") " pod="calico-system/calico-kube-controllers-d6f5f89f8-mz87q" Dec 16 03:28:24.933969 kubelet[2782]: I1216 03:28:24.933907 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkvsj\" (UniqueName: \"kubernetes.io/projected/cff3841d-c916-42d7-ba21-c96ff077d2f0-kube-api-access-pkvsj\") pod \"goldmane-7c778bb748-mx7nf\" (UID: \"cff3841d-c916-42d7-ba21-c96ff077d2f0\") " pod="calico-system/goldmane-7c778bb748-mx7nf" Dec 16 03:28:25.125172 kubelet[2782]: E1216 03:28:25.124955 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:28:25.126812 containerd[1596]: time="2025-12-16T03:28:25.126706010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-hphzw,Uid:4bd5b5ce-0952-4c36-9327-5163395763f4,Namespace:kube-system,Attempt:0,}" Dec 16 03:28:25.141439 containerd[1596]: time="2025-12-16T03:28:25.141031034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f76595f4d-8k6bg,Uid:cb31b945-d473-43bf-aec2-f1956131d323,Namespace:calico-system,Attempt:0,}" Dec 16 03:28:25.154967 kubelet[2782]: E1216 03:28:25.154732 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:28:25.165774 containerd[1596]: time="2025-12-16T03:28:25.165705705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-glpm2,Uid:4c0d8b2a-7296-491d-a267-2b8116b5d172,Namespace:kube-system,Attempt:0,}" Dec 16 03:28:25.170907 containerd[1596]: time="2025-12-16T03:28:25.170861444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d6f5f89f8-mz87q,Uid:34973ba8-5ee2-4cc3-a8d0-65270e641be0,Namespace:calico-system,Attempt:0,}" Dec 16 03:28:25.184910 containerd[1596]: time="2025-12-16T03:28:25.184849150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-mx7nf,Uid:cff3841d-c916-42d7-ba21-c96ff077d2f0,Namespace:calico-system,Attempt:0,}" Dec 16 03:28:25.198509 containerd[1596]: time="2025-12-16T03:28:25.198463599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699cff98b-rrdrk,Uid:49d300b0-388a-4cb6-b194-17855a6b768b,Namespace:calico-apiserver,Attempt:0,}" Dec 16 03:28:25.220407 containerd[1596]: 
time="2025-12-16T03:28:25.220296165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699cff98b-z4b6s,Uid:fb012d08-2ffd-46d4-bf1b-e4743471acfd,Namespace:calico-apiserver,Attempt:0,}" Dec 16 03:28:25.411579 systemd[1]: Created slice kubepods-besteffort-pod8744312e_a06c_4ec6_97fa_99683d819e93.slice - libcontainer container kubepods-besteffort-pod8744312e_a06c_4ec6_97fa_99683d819e93.slice. Dec 16 03:28:25.420245 containerd[1596]: time="2025-12-16T03:28:25.420201518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hrg2x,Uid:8744312e-a06c-4ec6-97fa-99683d819e93,Namespace:calico-system,Attempt:0,}" Dec 16 03:28:25.585533 containerd[1596]: time="2025-12-16T03:28:25.585400363Z" level=error msg="Failed to destroy network for sandbox \"ac90ba89844d0ae4f50753823131eec02d30ef4191298f30f299b10f39da50c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:28:25.589213 containerd[1596]: time="2025-12-16T03:28:25.589138135Z" level=error msg="Failed to destroy network for sandbox \"4d40016c2bb7b8fa9beeef61ab1883876aff71386fbef2ac770d24e7cbb13ebc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:28:25.592372 containerd[1596]: time="2025-12-16T03:28:25.592241884Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-glpm2,Uid:4c0d8b2a-7296-491d-a267-2b8116b5d172,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac90ba89844d0ae4f50753823131eec02d30ef4191298f30f299b10f39da50c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:28:25.596831 kubelet[2782]: E1216 03:28:25.596725 2782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac90ba89844d0ae4f50753823131eec02d30ef4191298f30f299b10f39da50c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:28:25.597531 containerd[1596]: time="2025-12-16T03:28:25.597211093Z" level=error msg="Failed to destroy network for sandbox \"6dd6c984ad65986374337c8c385f35445eab550e25c16f4e92e550e47665de2d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:28:25.597669 kubelet[2782]: E1216 03:28:25.597367 2782 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac90ba89844d0ae4f50753823131eec02d30ef4191298f30f299b10f39da50c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-glpm2" Dec 16 03:28:25.597669 kubelet[2782]: E1216 03:28:25.597407 2782 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ac90ba89844d0ae4f50753823131eec02d30ef4191298f30f299b10f39da50c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-glpm2" Dec 16 03:28:25.598200 kubelet[2782]: E1216 03:28:25.597900 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-glpm2_kube-system(4c0d8b2a-7296-491d-a267-2b8116b5d172)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-glpm2_kube-system(4c0d8b2a-7296-491d-a267-2b8116b5d172)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac90ba89844d0ae4f50753823131eec02d30ef4191298f30f299b10f39da50c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-glpm2" podUID="4c0d8b2a-7296-491d-a267-2b8116b5d172" Dec 16 03:28:25.603642 containerd[1596]: time="2025-12-16T03:28:25.603441721Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-hphzw,Uid:4bd5b5ce-0952-4c36-9327-5163395763f4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d40016c2bb7b8fa9beeef61ab1883876aff71386fbef2ac770d24e7cbb13ebc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:28:25.612801 kubelet[2782]: E1216 03:28:25.611965 2782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d40016c2bb7b8fa9beeef61ab1883876aff71386fbef2ac770d24e7cbb13ebc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:28:25.612801 kubelet[2782]: E1216 03:28:25.612050 2782 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d40016c2bb7b8fa9beeef61ab1883876aff71386fbef2ac770d24e7cbb13ebc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-hphzw" Dec 16 03:28:25.612801 kubelet[2782]: E1216 03:28:25.612079 2782 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d40016c2bb7b8fa9beeef61ab1883876aff71386fbef2ac770d24e7cbb13ebc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-hphzw" Dec 16 03:28:25.613235 kubelet[2782]: E1216 03:28:25.612264 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-hphzw_kube-system(4bd5b5ce-0952-4c36-9327-5163395763f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-hphzw_kube-system(4bd5b5ce-0952-4c36-9327-5163395763f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"4d40016c2bb7b8fa9beeef61ab1883876aff71386fbef2ac770d24e7cbb13ebc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-hphzw" podUID="4bd5b5ce-0952-4c36-9327-5163395763f4" Dec 16 03:28:25.615066 containerd[1596]: time="2025-12-16T03:28:25.614981426Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d6f5f89f8-mz87q,Uid:34973ba8-5ee2-4cc3-a8d0-65270e641be0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dd6c984ad65986374337c8c385f35445eab550e25c16f4e92e550e47665de2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:28:25.618791 kubelet[2782]: E1216 03:28:25.617917 2782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dd6c984ad65986374337c8c385f35445eab550e25c16f4e92e550e47665de2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:28:25.618791 kubelet[2782]: E1216 03:28:25.618032 2782 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dd6c984ad65986374337c8c385f35445eab550e25c16f4e92e550e47665de2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d6f5f89f8-mz87q" Dec 16 03:28:25.618791 kubelet[2782]: E1216 03:28:25.618084 2782 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dd6c984ad65986374337c8c385f35445eab550e25c16f4e92e550e47665de2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d6f5f89f8-mz87q" Dec 16 03:28:25.619233 kubelet[2782]: E1216 03:28:25.618454 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-d6f5f89f8-mz87q_calico-system(34973ba8-5ee2-4cc3-a8d0-65270e641be0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-d6f5f89f8-mz87q_calico-system(34973ba8-5ee2-4cc3-a8d0-65270e641be0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6dd6c984ad65986374337c8c385f35445eab550e25c16f4e92e550e47665de2d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-d6f5f89f8-mz87q" podUID="34973ba8-5ee2-4cc3-a8d0-65270e641be0" Dec 16 03:28:25.622062 containerd[1596]: time="2025-12-16T03:28:25.621998019Z" level=error msg="Failed to destroy network for sandbox \"53aeff4e4291b4b13e0f11221dc9af9ba1de2215cde34ebbb0211def5e11ece9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:28:25.623070 containerd[1596]: time="2025-12-16T03:28:25.623025961Z" level=error msg="Failed to destroy network for sandbox \"be682d85bd1e7d60e72b3a817a3b26481fd41bace61e8f55c2ec8726c9fc9707\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:28:25.646263 containerd[1596]: time="2025-12-16T03:28:25.646069126Z" level=error msg="Failed to destroy network for sandbox \"31144f18f8f20f63c8da75c3c9f6a5af8afdd60830b394d40f572dea24bdf2de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:28:25.646712 containerd[1596]: time="2025-12-16T03:28:25.646262833Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699cff98b-rrdrk,Uid:49d300b0-388a-4cb6-b194-17855a6b768b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"53aeff4e4291b4b13e0f11221dc9af9ba1de2215cde34ebbb0211def5e11ece9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:28:25.647177 kubelet[2782]: E1216 03:28:25.647115 2782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53aeff4e4291b4b13e0f11221dc9af9ba1de2215cde34ebbb0211def5e11ece9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:28:25.648414 kubelet[2782]: E1216 03:28:25.647175 2782 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53aeff4e4291b4b13e0f11221dc9af9ba1de2215cde34ebbb0211def5e11ece9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-699cff98b-rrdrk" Dec 16 03:28:25.648414 kubelet[2782]: E1216 03:28:25.647202 2782 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53aeff4e4291b4b13e0f11221dc9af9ba1de2215cde34ebbb0211def5e11ece9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-699cff98b-rrdrk" Dec 16 03:28:25.648414 kubelet[2782]: E1216 03:28:25.647263 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-699cff98b-rrdrk_calico-apiserver(49d300b0-388a-4cb6-b194-17855a6b768b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-699cff98b-rrdrk_calico-apiserver(49d300b0-388a-4cb6-b194-17855a6b768b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"53aeff4e4291b4b13e0f11221dc9af9ba1de2215cde34ebbb0211def5e11ece9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-699cff98b-rrdrk" podUID="49d300b0-388a-4cb6-b194-17855a6b768b" Dec 16 03:28:25.652288 containerd[1596]: time="2025-12-16T03:28:25.651891017Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f76595f4d-8k6bg,Uid:cb31b945-d473-43bf-aec2-f1956131d323,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"be682d85bd1e7d60e72b3a817a3b26481fd41bace61e8f55c2ec8726c9fc9707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:28:25.654142 kubelet[2782]: E1216 03:28:25.654019 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:28:25.654403 kubelet[2782]: E1216 03:28:25.654280 2782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be682d85bd1e7d60e72b3a817a3b26481fd41bace61e8f55c2ec8726c9fc9707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:28:25.655072 kubelet[2782]: E1216 03:28:25.654525 2782 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be682d85bd1e7d60e72b3a817a3b26481fd41bace61e8f55c2ec8726c9fc9707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f76595f4d-8k6bg" Dec 16 03:28:25.655072 kubelet[2782]: E1216 03:28:25.654562 2782 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be682d85bd1e7d60e72b3a817a3b26481fd41bace61e8f55c2ec8726c9fc9707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f76595f4d-8k6bg" Dec 16 03:28:25.655072 kubelet[2782]: E1216 03:28:25.654728 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5f76595f4d-8k6bg_calico-system(cb31b945-d473-43bf-aec2-f1956131d323)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5f76595f4d-8k6bg_calico-system(cb31b945-d473-43bf-aec2-f1956131d323)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be682d85bd1e7d60e72b3a817a3b26481fd41bace61e8f55c2ec8726c9fc9707\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5f76595f4d-8k6bg" podUID="cb31b945-d473-43bf-aec2-f1956131d323" Dec 16 03:28:25.658168 containerd[1596]: time="2025-12-16T03:28:25.658024746Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-mx7nf,Uid:cff3841d-c916-42d7-ba21-c96ff077d2f0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"31144f18f8f20f63c8da75c3c9f6a5af8afdd60830b394d40f572dea24bdf2de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:28:25.660442 kubelet[2782]: E1216 03:28:25.659877 2782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31144f18f8f20f63c8da75c3c9f6a5af8afdd60830b394d40f572dea24bdf2de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:28:25.660442 kubelet[2782]: E1216 03:28:25.660001 2782 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31144f18f8f20f63c8da75c3c9f6a5af8afdd60830b394d40f572dea24bdf2de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-mx7nf" Dec 16 03:28:25.661337 kubelet[2782]: E1216 03:28:25.661023 2782 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31144f18f8f20f63c8da75c3c9f6a5af8afdd60830b394d40f572dea24bdf2de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-mx7nf" Dec 16 03:28:25.661709 kubelet[2782]: E1216 03:28:25.661565 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-mx7nf_calico-system(cff3841d-c916-42d7-ba21-c96ff077d2f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-mx7nf_calico-system(cff3841d-c916-42d7-ba21-c96ff077d2f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31144f18f8f20f63c8da75c3c9f6a5af8afdd60830b394d40f572dea24bdf2de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-mx7nf" podUID="cff3841d-c916-42d7-ba21-c96ff077d2f0" Dec 16 03:28:25.682159 containerd[1596]: time="2025-12-16T03:28:25.678923802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 03:28:25.695241 containerd[1596]: time="2025-12-16T03:28:25.695181575Z" level=error msg="Failed to destroy network for sandbox \"0da283cb9a8171685f2ac860aef1f012292e2c77924a0ac348b29c191b4949e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:28:25.703250 containerd[1596]: time="2025-12-16T03:28:25.702975499Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699cff98b-z4b6s,Uid:fb012d08-2ffd-46d4-bf1b-e4743471acfd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0da283cb9a8171685f2ac860aef1f012292e2c77924a0ac348b29c191b4949e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:28:25.705307 kubelet[2782]: E1216 03:28:25.705252 2782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0da283cb9a8171685f2ac860aef1f012292e2c77924a0ac348b29c191b4949e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:28:25.705444 kubelet[2782]: E1216 03:28:25.705332 2782 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0da283cb9a8171685f2ac860aef1f012292e2c77924a0ac348b29c191b4949e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-699cff98b-z4b6s" Dec 16 03:28:25.705444 kubelet[2782]: E1216 03:28:25.705358 2782 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0da283cb9a8171685f2ac860aef1f012292e2c77924a0ac348b29c191b4949e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-699cff98b-z4b6s" Dec 16 03:28:25.705538 kubelet[2782]: E1216 03:28:25.705438 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-699cff98b-z4b6s_calico-apiserver(fb012d08-2ffd-46d4-bf1b-e4743471acfd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-699cff98b-z4b6s_calico-apiserver(fb012d08-2ffd-46d4-bf1b-e4743471acfd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0da283cb9a8171685f2ac860aef1f012292e2c77924a0ac348b29c191b4949e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-699cff98b-z4b6s" podUID="fb012d08-2ffd-46d4-bf1b-e4743471acfd" Dec 16 03:28:25.761151 containerd[1596]: time="2025-12-16T03:28:25.761038088Z" level=error msg="Failed to destroy network for sandbox \"ff287bdd86832592ee0b8fa3723d48ca0a61272967ec871f43910773382b7b05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:28:25.764438 containerd[1596]: time="2025-12-16T03:28:25.764361851Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hrg2x,Uid:8744312e-a06c-4ec6-97fa-99683d819e93,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff287bdd86832592ee0b8fa3723d48ca0a61272967ec871f43910773382b7b05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:28:25.766122 kubelet[2782]: E1216 03:28:25.764697 2782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ff287bdd86832592ee0b8fa3723d48ca0a61272967ec871f43910773382b7b05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:28:25.766122 kubelet[2782]: E1216 03:28:25.764801 2782 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff287bdd86832592ee0b8fa3723d48ca0a61272967ec871f43910773382b7b05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hrg2x" Dec 16 03:28:25.766122 kubelet[2782]: E1216 03:28:25.764829 2782 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff287bdd86832592ee0b8fa3723d48ca0a61272967ec871f43910773382b7b05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hrg2x" Dec 16 03:28:25.765955 systemd[1]: run-netns-cni\x2decf7eeba\x2d9ddd\x2d6dd2\x2d4223\x2dcf7f9d513d1b.mount: Deactivated successfully. Dec 16 03:28:25.766919 kubelet[2782]: E1216 03:28:25.764904 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hrg2x_calico-system(8744312e-a06c-4ec6-97fa-99683d819e93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hrg2x_calico-system(8744312e-a06c-4ec6-97fa-99683d819e93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff287bdd86832592ee0b8fa3723d48ca0a61272967ec871f43910773382b7b05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hrg2x" podUID="8744312e-a06c-4ec6-97fa-99683d819e93" Dec 16 03:28:33.571343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3006997937.mount: Deactivated successfully. 
Dec 16 03:28:33.637349 containerd[1596]: time="2025-12-16T03:28:33.619081916Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:28:33.639475 containerd[1596]: time="2025-12-16T03:28:33.639175540Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:28:33.643890 containerd[1596]: time="2025-12-16T03:28:33.643806150Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Dec 16 03:28:33.644552 containerd[1596]: time="2025-12-16T03:28:33.644499274Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.960684965s" Dec 16 03:28:33.644552 containerd[1596]: time="2025-12-16T03:28:33.644556136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 16 03:28:33.644828 containerd[1596]: time="2025-12-16T03:28:33.644793150Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:28:33.675310 containerd[1596]: time="2025-12-16T03:28:33.675245452Z" level=info msg="CreateContainer within sandbox \"61abb08980ebe5e486176a4d4989d22967859019bc8cdee89f3d1f5e743be48c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 03:28:33.732527 containerd[1596]: time="2025-12-16T03:28:33.732421523Z" level=info msg="Container ab9aa25c9ca176d6d3d1e07cc32a8ba79211fadc965fd39f7473eb262fc7a996: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:28:33.733911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2657456366.mount: Deactivated successfully. Dec 16 03:28:33.792914 containerd[1596]: time="2025-12-16T03:28:33.792837488Z" level=info msg="CreateContainer within sandbox \"61abb08980ebe5e486176a4d4989d22967859019bc8cdee89f3d1f5e743be48c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ab9aa25c9ca176d6d3d1e07cc32a8ba79211fadc965fd39f7473eb262fc7a996\"" Dec 16 03:28:33.794124 containerd[1596]: time="2025-12-16T03:28:33.794036491Z" level=info msg="StartContainer for \"ab9aa25c9ca176d6d3d1e07cc32a8ba79211fadc965fd39f7473eb262fc7a996\"" Dec 16 03:28:33.796499 containerd[1596]: time="2025-12-16T03:28:33.796460797Z" level=info msg="connecting to shim ab9aa25c9ca176d6d3d1e07cc32a8ba79211fadc965fd39f7473eb262fc7a996" address="unix:///run/containerd/s/d8356615bef8c78ceb72ba915aba9f6f685e1d356682a76c1e63dc0f2371c5e3" protocol=ttrpc version=3 Dec 16 03:28:33.884440 systemd[1]: Started cri-containerd-ab9aa25c9ca176d6d3d1e07cc32a8ba79211fadc965fd39f7473eb262fc7a996.scope - libcontainer container ab9aa25c9ca176d6d3d1e07cc32a8ba79211fadc965fd39f7473eb262fc7a996. 
Dec 16 03:28:33.947000 audit: BPF prog-id=172 op=LOAD Dec 16 03:28:33.948552 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 16 03:28:33.948656 kernel: audit: type=1334 audit(1765855713.947:573): prog-id=172 op=LOAD Dec 16 03:28:33.947000 audit[3837]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3304 pid=3837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:33.951883 kernel: audit: type=1300 audit(1765855713.947:573): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3304 pid=3837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:33.947000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162396161323563396361313736643664336431653037636333326138 Dec 16 03:28:33.955459 kernel: audit: type=1327 audit(1765855713.947:573): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162396161323563396361313736643664336431653037636333326138 Dec 16 03:28:33.947000 audit: BPF prog-id=173 op=LOAD Dec 16 03:28:33.958841 kernel: audit: type=1334 audit(1765855713.947:574): prog-id=173 op=LOAD Dec 16 03:28:33.947000 audit[3837]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3304 pid=3837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:33.961988 kernel: audit: type=1300 audit(1765855713.947:574): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3304 pid=3837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:33.947000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162396161323563396361313736643664336431653037636333326138 Dec 16 03:28:33.969032 kernel: audit: type=1327 audit(1765855713.947:574): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162396161323563396361313736643664336431653037636333326138 Dec 16 03:28:33.947000 audit: BPF prog-id=173 op=UNLOAD Dec 16 03:28:33.972997 kernel: audit: type=1334 audit(1765855713.947:575): prog-id=173 op=UNLOAD Dec 16 03:28:33.947000 audit[3837]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3304 pid=3837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:33.947000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162396161323563396361313736643664336431653037636333326138 Dec 16 03:28:33.981325 kernel: audit: type=1300 audit(1765855713.947:575): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3304 pid=3837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:33.981453 kernel: audit: type=1327 audit(1765855713.947:575): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162396161323563396361313736643664336431653037636333326138 Dec 16 03:28:33.947000 audit: BPF prog-id=172 op=UNLOAD Dec 16 03:28:33.947000 audit[3837]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3304 pid=3837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:33.947000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162396161323563396361313736643664336431653037636333326138 Dec 16 03:28:33.947000 audit: BPF prog-id=174 op=LOAD Dec 16 03:28:33.947000 audit[3837]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3304 pid=3837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:33.987789 kernel: audit: type=1334 audit(1765855713.947:576): prog-id=172 op=UNLOAD Dec 16 03:28:33.947000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162396161323563396361313736643664336431653037636333326138 Dec 16 03:28:34.014391 containerd[1596]: time="2025-12-16T03:28:34.014349171Z" level=info msg="StartContainer for \"ab9aa25c9ca176d6d3d1e07cc32a8ba79211fadc965fd39f7473eb262fc7a996\" returns successfully" Dec 16 03:28:34.176429 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 03:28:34.176668 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
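The BPF audit pairs above follow one pattern: each "BPF prog-id=... op=LOAD" sits next to a SYSCALL record with syscall=321, which is bpf on x86_64 and whose exit value is the new program fd, while each op=UNLOAD sits next to syscall=3 (close) whose a0 argument, printed in hex by auditd, is that same fd. A small sketch of the correlation, using values copied from the records above:

```go
package main

import (
	"fmt"
	"strconv"
)

func main() {
	// x86_64 syscall numbers seen in the audit records: 321 = bpf, 3 = close.
	syscalls := map[int]string{321: "bpf", 3: "close"}
	fmt.Println("syscall 321 is", syscalls[321], "and syscall 3 is", syscalls[3])

	// auditd prints syscall arguments in hex: close(a0=16) closes fd 22,
	// matching exit=22 returned by the earlier bpf() load in the same pair.
	fd, _ := strconv.ParseUint("16", 16, 32)
	fmt.Printf("close(a0=0x16) closed fd %d\n", fd)
}
```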
Dec 16 03:28:34.627039 kubelet[2782]: I1216 03:28:34.626934 2782 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cb31b945-d473-43bf-aec2-f1956131d323-whisker-backend-key-pair\") pod \"cb31b945-d473-43bf-aec2-f1956131d323\" (UID: \"cb31b945-d473-43bf-aec2-f1956131d323\") " Dec 16 03:28:34.627039 kubelet[2782]: I1216 03:28:34.626999 2782 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vgzh\" (UniqueName: \"kubernetes.io/projected/cb31b945-d473-43bf-aec2-f1956131d323-kube-api-access-6vgzh\") pod \"cb31b945-d473-43bf-aec2-f1956131d323\" (UID: \"cb31b945-d473-43bf-aec2-f1956131d323\") " Dec 16 03:28:34.630363 kubelet[2782]: I1216 03:28:34.627169 2782 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb31b945-d473-43bf-aec2-f1956131d323-whisker-ca-bundle\") pod \"cb31b945-d473-43bf-aec2-f1956131d323\" (UID: \"cb31b945-d473-43bf-aec2-f1956131d323\") " Dec 16 03:28:34.630363 kubelet[2782]: I1216 03:28:34.628020 2782 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb31b945-d473-43bf-aec2-f1956131d323-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "cb31b945-d473-43bf-aec2-f1956131d323" (UID: "cb31b945-d473-43bf-aec2-f1956131d323"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 03:28:34.638590 kubelet[2782]: I1216 03:28:34.638330 2782 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb31b945-d473-43bf-aec2-f1956131d323-kube-api-access-6vgzh" (OuterVolumeSpecName: "kube-api-access-6vgzh") pod "cb31b945-d473-43bf-aec2-f1956131d323" (UID: "cb31b945-d473-43bf-aec2-f1956131d323"). InnerVolumeSpecName "kube-api-access-6vgzh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 03:28:34.639902 systemd[1]: var-lib-kubelet-pods-cb31b945\x2dd473\x2d43bf\x2daec2\x2df1956131d323-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6vgzh.mount: Deactivated successfully. Dec 16 03:28:34.646550 kubelet[2782]: I1216 03:28:34.646478 2782 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb31b945-d473-43bf-aec2-f1956131d323-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "cb31b945-d473-43bf-aec2-f1956131d323" (UID: "cb31b945-d473-43bf-aec2-f1956131d323"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 03:28:34.646780 systemd[1]: var-lib-kubelet-pods-cb31b945\x2dd473\x2d43bf\x2daec2\x2df1956131d323-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 16 03:28:34.699860 kubelet[2782]: E1216 03:28:34.699814 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:28:34.709155 systemd[1]: Removed slice kubepods-besteffort-podcb31b945_d473_43bf_aec2_f1956131d323.slice - libcontainer container kubepods-besteffort-podcb31b945_d473_43bf_aec2_f1956131d323.slice. 
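The recurring "Nameserver limits exceeded" messages (another appears in the entry above) come from the kubelet capping a pod's resolv.conf at three nameservers and reporting the rest as omitted, which is why the applied line never shows more than three entries. A sketch of that truncation; the host resolver list here is hypothetical, since the journal only shows the applied result:

```go
package main

import "fmt"

// maxNameservers mirrors the kubelet limit behind the "Nameserver limits
// exceeded" warnings; only the first entries are applied to the pod.
const maxNameservers = 3

func applyLimit(nameservers []string) (applied, omitted []string) {
	if len(nameservers) <= maxNameservers {
		return nameservers, nil
	}
	return nameservers[:maxNameservers], nameservers[maxNameservers:]
}

func main() {
	// Hypothetical host list: the journal only shows the applied line
	// "67.207.67.2 67.207.67.3 67.207.67.2".
	host := []string{"67.207.67.2", "67.207.67.3", "67.207.67.2", "67.207.67.3"}
	applied, omitted := applyLimit(host)
	fmt.Println("applied:", applied)
	fmt.Println("omitted:", omitted)
}
```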
Dec 16 03:28:34.728626 kubelet[2782]: I1216 03:28:34.728455 2782 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cb31b945-d473-43bf-aec2-f1956131d323-whisker-backend-key-pair\") on node \"ci-4547.0.0-8-fbad3a37dc\" DevicePath \"\"" Dec 16 03:28:34.728626 kubelet[2782]: I1216 03:28:34.728518 2782 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6vgzh\" (UniqueName: \"kubernetes.io/projected/cb31b945-d473-43bf-aec2-f1956131d323-kube-api-access-6vgzh\") on node \"ci-4547.0.0-8-fbad3a37dc\" DevicePath \"\"" Dec 16 03:28:34.728626 kubelet[2782]: I1216 03:28:34.728534 2782 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb31b945-d473-43bf-aec2-f1956131d323-whisker-ca-bundle\") on node \"ci-4547.0.0-8-fbad3a37dc\" DevicePath \"\"" Dec 16 03:28:34.744056 kubelet[2782]: I1216 03:28:34.742639 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4hms4" podStartSLOduration=1.9698697520000001 podStartE2EDuration="21.742609653s" podCreationTimestamp="2025-12-16 03:28:13 +0000 UTC" firstStartedPulling="2025-12-16 03:28:13.87319609 +0000 UTC m=+21.740408432" lastFinishedPulling="2025-12-16 03:28:33.645935993 +0000 UTC m=+41.513148333" observedRunningTime="2025-12-16 03:28:34.737993657 +0000 UTC m=+42.605206034" watchObservedRunningTime="2025-12-16 03:28:34.742609653 +0000 UTC m=+42.609822021" Dec 16 03:28:34.967821 systemd[1]: Created slice kubepods-besteffort-podc131b783_1bf8_4038_b6d0_3485a4a710ea.slice - libcontainer container kubepods-besteffort-podc131b783_1bf8_4038_b6d0_3485a4a710ea.slice. Dec 16 03:28:35.034713 kubelet[2782]: I1216 03:28:35.034445 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m98kl\" (UniqueName: \"kubernetes.io/projected/c131b783-1bf8-4038-b6d0-3485a4a710ea-kube-api-access-m98kl\") pod \"whisker-64588b6f98-lngtn\" (UID: \"c131b783-1bf8-4038-b6d0-3485a4a710ea\") " pod="calico-system/whisker-64588b6f98-lngtn" Dec 16 03:28:35.034713 kubelet[2782]: I1216 03:28:35.034523 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c131b783-1bf8-4038-b6d0-3485a4a710ea-whisker-ca-bundle\") pod \"whisker-64588b6f98-lngtn\" (UID: \"c131b783-1bf8-4038-b6d0-3485a4a710ea\") " pod="calico-system/whisker-64588b6f98-lngtn" Dec 16 03:28:35.034713 kubelet[2782]: I1216 03:28:35.034643 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c131b783-1bf8-4038-b6d0-3485a4a710ea-whisker-backend-key-pair\") pod \"whisker-64588b6f98-lngtn\" (UID: \"c131b783-1bf8-4038-b6d0-3485a4a710ea\") " pod="calico-system/whisker-64588b6f98-lngtn" Dec 16 03:28:35.283032 containerd[1596]: time="2025-12-16T03:28:35.282959489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64588b6f98-lngtn,Uid:c131b783-1bf8-4038-b6d0-3485a4a710ea,Namespace:calico-system,Attempt:0,}" Dec 16 03:28:35.623043 systemd-networkd[1494]: cali909dd78a447: Link UP Dec 16 03:28:35.623357 systemd-networkd[1494]: cali909dd78a447: Gained carrier Dec 16 03:28:35.674644 containerd[1596]: 2025-12-16 03:28:35.316 [INFO][3935] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 03:28:35.674644 containerd[1596]: 2025-12-16 
03:28:35.348 [INFO][3935] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--8--fbad3a37dc-k8s-whisker--64588b6f98--lngtn-eth0 whisker-64588b6f98- calico-system c131b783-1bf8-4038-b6d0-3485a4a710ea 924 0 2025-12-16 03:28:34 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:64588b6f98 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4547.0.0-8-fbad3a37dc whisker-64588b6f98-lngtn eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali909dd78a447 [] [] }} ContainerID="1d5354b7050e04e42809d8a0d1739d9206b11697c62c0294f3050902e590adaa" Namespace="calico-system" Pod="whisker-64588b6f98-lngtn" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-whisker--64588b6f98--lngtn-" Dec 16 03:28:35.674644 containerd[1596]: 2025-12-16 03:28:35.349 [INFO][3935] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1d5354b7050e04e42809d8a0d1739d9206b11697c62c0294f3050902e590adaa" Namespace="calico-system" Pod="whisker-64588b6f98-lngtn" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-whisker--64588b6f98--lngtn-eth0" Dec 16 03:28:35.674644 containerd[1596]: 2025-12-16 03:28:35.528 [INFO][3943] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d5354b7050e04e42809d8a0d1739d9206b11697c62c0294f3050902e590adaa" HandleID="k8s-pod-network.1d5354b7050e04e42809d8a0d1739d9206b11697c62c0294f3050902e590adaa" Workload="ci--4547.0.0--8--fbad3a37dc-k8s-whisker--64588b6f98--lngtn-eth0" Dec 16 03:28:35.675025 containerd[1596]: 2025-12-16 03:28:35.530 [INFO][3943] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1d5354b7050e04e42809d8a0d1739d9206b11697c62c0294f3050902e590adaa" HandleID="k8s-pod-network.1d5354b7050e04e42809d8a0d1739d9206b11697c62c0294f3050902e590adaa" Workload="ci--4547.0.0--8--fbad3a37dc-k8s-whisker--64588b6f98--lngtn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a0170), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-8-fbad3a37dc", "pod":"whisker-64588b6f98-lngtn", "timestamp":"2025-12-16 03:28:35.528564053 +0000 UTC"}, Hostname:"ci-4547.0.0-8-fbad3a37dc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:28:35.675025 containerd[1596]: 2025-12-16 03:28:35.530 [INFO][3943] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:28:35.675025 containerd[1596]: 2025-12-16 03:28:35.531 [INFO][3943] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
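
The pod_startup_latency_tracker entry above reports podStartE2EDuration="21.742609653s" and podStartSLOduration=1.969869752s for calico-node-4hms4. Those figures are consistent with the SLO duration being the end-to-end duration minus the image-pull window (lastFinishedPulling minus firstStartedPulling), computed here from the monotonic m=+ offsets in the same entry; the exact formula kubelet applies is not shown in this log, so treat this as a consistency check rather than a definition:

    # Monotonic m=+ offsets copied from the pod_startup_latency_tracker entry
    # above, in seconds.
    first_started_pulling = 21.740408432   # firstStartedPulling  m=+21.740408432
    last_finished_pulling = 41.513148333   # lastFinishedPulling  m=+41.513148333
    pod_start_e2e = 21.742609653           # podStartE2EDuration

    pull_window = last_finished_pulling - first_started_pulling
    slo_duration = pod_start_e2e - pull_window
    print(f"image pull window: {pull_window:.9f}s")   # 19.772739901s
    print(f"slo duration:      {slo_duration:.9f}s")  # 1.969869752s, matching podStartSLOduration
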
Dec 16 03:28:35.675025 containerd[1596]: 2025-12-16 03:28:35.532 [INFO][3943] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-8-fbad3a37dc' Dec 16 03:28:35.675025 containerd[1596]: 2025-12-16 03:28:35.549 [INFO][3943] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1d5354b7050e04e42809d8a0d1739d9206b11697c62c0294f3050902e590adaa" host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:35.675025 containerd[1596]: 2025-12-16 03:28:35.564 [INFO][3943] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:35.675025 containerd[1596]: 2025-12-16 03:28:35.577 [INFO][3943] ipam/ipam.go 511: Trying affinity for 192.168.122.192/26 host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:35.675025 containerd[1596]: 2025-12-16 03:28:35.580 [INFO][3943] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.192/26 host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:35.675025 containerd[1596]: 2025-12-16 03:28:35.584 [INFO][3943] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:35.675471 containerd[1596]: 2025-12-16 03:28:35.584 [INFO][3943] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.1d5354b7050e04e42809d8a0d1739d9206b11697c62c0294f3050902e590adaa" host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:35.675471 containerd[1596]: 2025-12-16 03:28:35.587 [INFO][3943] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1d5354b7050e04e42809d8a0d1739d9206b11697c62c0294f3050902e590adaa Dec 16 03:28:35.675471 containerd[1596]: 2025-12-16 03:28:35.594 [INFO][3943] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.192/26 handle="k8s-pod-network.1d5354b7050e04e42809d8a0d1739d9206b11697c62c0294f3050902e590adaa" host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:35.675471 containerd[1596]: 2025-12-16 03:28:35.602 [INFO][3943] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.122.193/26] block=192.168.122.192/26 handle="k8s-pod-network.1d5354b7050e04e42809d8a0d1739d9206b11697c62c0294f3050902e590adaa" host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:35.675471 containerd[1596]: 2025-12-16 03:28:35.602 [INFO][3943] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.193/26] handle="k8s-pod-network.1d5354b7050e04e42809d8a0d1739d9206b11697c62c0294f3050902e590adaa" host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:35.675471 containerd[1596]: 2025-12-16 03:28:35.602 [INFO][3943] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
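
The IPAM trace above ends with 192.168.122.193/26 being claimed out of the block 192.168.122.192/26 that this node holds an affinity for. A small standard-library sketch of the subnet arithmetic only (Calico's block-affinity and reservation logic is not reproduced), showing why .193 is the first host address the block can hand out:

    import ipaddress

    block = ipaddress.ip_network("192.168.122.192/26")
    print(block.network_address, block.broadcast_address, block.num_addresses)
    # 192.168.122.192 192.168.122.255 64

    # First few host addresses in the block; the trace above claims .193 for
    # the whisker pod, and the next workload on this node gets .194.
    print([str(ip) for ip in list(block.hosts())[:3]])
    # ['192.168.122.193', '192.168.122.194', '192.168.122.195']
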
Dec 16 03:28:35.675471 containerd[1596]: 2025-12-16 03:28:35.602 [INFO][3943] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.193/26] IPv6=[] ContainerID="1d5354b7050e04e42809d8a0d1739d9206b11697c62c0294f3050902e590adaa" HandleID="k8s-pod-network.1d5354b7050e04e42809d8a0d1739d9206b11697c62c0294f3050902e590adaa" Workload="ci--4547.0.0--8--fbad3a37dc-k8s-whisker--64588b6f98--lngtn-eth0" Dec 16 03:28:35.675734 containerd[1596]: 2025-12-16 03:28:35.605 [INFO][3935] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1d5354b7050e04e42809d8a0d1739d9206b11697c62c0294f3050902e590adaa" Namespace="calico-system" Pod="whisker-64588b6f98-lngtn" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-whisker--64588b6f98--lngtn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--8--fbad3a37dc-k8s-whisker--64588b6f98--lngtn-eth0", GenerateName:"whisker-64588b6f98-", Namespace:"calico-system", SelfLink:"", UID:"c131b783-1bf8-4038-b6d0-3485a4a710ea", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"64588b6f98", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-8-fbad3a37dc", ContainerID:"", Pod:"whisker-64588b6f98-lngtn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.122.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali909dd78a447", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:28:35.675734 containerd[1596]: 2025-12-16 03:28:35.606 [INFO][3935] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.193/32] ContainerID="1d5354b7050e04e42809d8a0d1739d9206b11697c62c0294f3050902e590adaa" Namespace="calico-system" Pod="whisker-64588b6f98-lngtn" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-whisker--64588b6f98--lngtn-eth0" Dec 16 03:28:35.675880 containerd[1596]: 2025-12-16 03:28:35.606 [INFO][3935] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali909dd78a447 ContainerID="1d5354b7050e04e42809d8a0d1739d9206b11697c62c0294f3050902e590adaa" Namespace="calico-system" Pod="whisker-64588b6f98-lngtn" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-whisker--64588b6f98--lngtn-eth0" Dec 16 03:28:35.675880 containerd[1596]: 2025-12-16 03:28:35.620 [INFO][3935] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1d5354b7050e04e42809d8a0d1739d9206b11697c62c0294f3050902e590adaa" Namespace="calico-system" Pod="whisker-64588b6f98-lngtn" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-whisker--64588b6f98--lngtn-eth0" Dec 16 03:28:35.675967 containerd[1596]: 2025-12-16 03:28:35.624 [INFO][3935] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1d5354b7050e04e42809d8a0d1739d9206b11697c62c0294f3050902e590adaa" Namespace="calico-system" 
Pod="whisker-64588b6f98-lngtn" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-whisker--64588b6f98--lngtn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--8--fbad3a37dc-k8s-whisker--64588b6f98--lngtn-eth0", GenerateName:"whisker-64588b6f98-", Namespace:"calico-system", SelfLink:"", UID:"c131b783-1bf8-4038-b6d0-3485a4a710ea", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"64588b6f98", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-8-fbad3a37dc", ContainerID:"1d5354b7050e04e42809d8a0d1739d9206b11697c62c0294f3050902e590adaa", Pod:"whisker-64588b6f98-lngtn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.122.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali909dd78a447", MAC:"b2:cd:a2:21:13:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:28:35.676063 containerd[1596]: 2025-12-16 03:28:35.660 [INFO][3935] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1d5354b7050e04e42809d8a0d1739d9206b11697c62c0294f3050902e590adaa" Namespace="calico-system" Pod="whisker-64588b6f98-lngtn" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-whisker--64588b6f98--lngtn-eth0" Dec 16 03:28:35.704164 kubelet[2782]: E1216 03:28:35.703936 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:28:35.975720 containerd[1596]: time="2025-12-16T03:28:35.975355409Z" level=info msg="connecting to shim 1d5354b7050e04e42809d8a0d1739d9206b11697c62c0294f3050902e590adaa" address="unix:///run/containerd/s/012a49f092babecd5e78ed472a78801ee768e4f1d893b77956bb554d2c67cfc2" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:28:36.042644 systemd[1]: Started cri-containerd-1d5354b7050e04e42809d8a0d1739d9206b11697c62c0294f3050902e590adaa.scope - libcontainer container 1d5354b7050e04e42809d8a0d1739d9206b11697c62c0294f3050902e590adaa. 
Dec 16 03:28:36.068000 audit: BPF prog-id=175 op=LOAD Dec 16 03:28:36.069000 audit: BPF prog-id=176 op=LOAD Dec 16 03:28:36.069000 audit[3997]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=3986 pid=3997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:36.069000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164353335346237303530653034653432383039643861306431373339 Dec 16 03:28:36.069000 audit: BPF prog-id=176 op=UNLOAD Dec 16 03:28:36.069000 audit[3997]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3986 pid=3997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:36.069000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164353335346237303530653034653432383039643861306431373339 Dec 16 03:28:36.070000 audit: BPF prog-id=177 op=LOAD Dec 16 03:28:36.070000 audit[3997]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3986 pid=3997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:36.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164353335346237303530653034653432383039643861306431373339 Dec 16 03:28:36.070000 audit: BPF prog-id=178 op=LOAD Dec 16 03:28:36.070000 audit[3997]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3986 pid=3997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:36.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164353335346237303530653034653432383039643861306431373339 Dec 16 03:28:36.071000 audit: BPF prog-id=178 op=UNLOAD Dec 16 03:28:36.071000 audit[3997]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3986 pid=3997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:36.071000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164353335346237303530653034653432383039643861306431373339 Dec 16 03:28:36.071000 audit: BPF prog-id=177 op=UNLOAD Dec 16 03:28:36.071000 audit[3997]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3986 pid=3997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:36.071000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164353335346237303530653034653432383039643861306431373339 Dec 16 03:28:36.071000 audit: BPF prog-id=179 op=LOAD Dec 16 03:28:36.071000 audit[3997]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3986 pid=3997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:36.071000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164353335346237303530653034653432383039643861306431373339 Dec 16 03:28:36.135291 containerd[1596]: time="2025-12-16T03:28:36.135155127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64588b6f98-lngtn,Uid:c131b783-1bf8-4038-b6d0-3485a4a710ea,Namespace:calico-system,Attempt:0,} returns sandbox id \"1d5354b7050e04e42809d8a0d1739d9206b11697c62c0294f3050902e590adaa\"" Dec 16 03:28:36.140728 containerd[1596]: time="2025-12-16T03:28:36.140672098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 03:28:36.407207 kubelet[2782]: E1216 03:28:36.406568 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:28:36.408078 containerd[1596]: time="2025-12-16T03:28:36.408010669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-glpm2,Uid:4c0d8b2a-7296-491d-a267-2b8116b5d172,Namespace:kube-system,Attempt:0,}" Dec 16 03:28:36.415996 kubelet[2782]: I1216 03:28:36.415864 2782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb31b945-d473-43bf-aec2-f1956131d323" path="/var/lib/kubelet/pods/cb31b945-d473-43bf-aec2-f1956131d323/volumes" Dec 16 03:28:36.466973 containerd[1596]: time="2025-12-16T03:28:36.466920982Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:28:36.476221 containerd[1596]: time="2025-12-16T03:28:36.467892426Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 03:28:36.476756 containerd[1596]: time="2025-12-16T03:28:36.470313838Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 03:28:36.479292 kubelet[2782]: E1216 03:28:36.477350 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:28:36.479616 kubelet[2782]: E1216 
03:28:36.479268 2782 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:28:36.479717 kubelet[2782]: E1216 03:28:36.479695 2782 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-64588b6f98-lngtn_calico-system(c131b783-1bf8-4038-b6d0-3485a4a710ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 03:28:36.481725 containerd[1596]: time="2025-12-16T03:28:36.481679627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 03:28:36.714512 systemd-networkd[1494]: cali1e2a1909872: Link UP Dec 16 03:28:36.718812 systemd-networkd[1494]: cali1e2a1909872: Gained carrier Dec 16 03:28:36.764434 containerd[1596]: 2025-12-16 03:28:36.487 [INFO][4110] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 03:28:36.764434 containerd[1596]: 2025-12-16 03:28:36.520 [INFO][4110] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--8--fbad3a37dc-k8s-coredns--66bc5c9577--glpm2-eth0 coredns-66bc5c9577- kube-system 4c0d8b2a-7296-491d-a267-2b8116b5d172 843 0 2025-12-16 03:27:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547.0.0-8-fbad3a37dc coredns-66bc5c9577-glpm2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1e2a1909872 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="d7d9bf373147d59c33068e64353cf629ed7671b02a37e938fb6d6036f3185532" Namespace="kube-system" Pod="coredns-66bc5c9577-glpm2" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-coredns--66bc5c9577--glpm2-" Dec 16 03:28:36.764434 containerd[1596]: 2025-12-16 03:28:36.521 [INFO][4110] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d7d9bf373147d59c33068e64353cf629ed7671b02a37e938fb6d6036f3185532" Namespace="kube-system" Pod="coredns-66bc5c9577-glpm2" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-coredns--66bc5c9577--glpm2-eth0" Dec 16 03:28:36.764434 containerd[1596]: 2025-12-16 03:28:36.609 [INFO][4123] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d7d9bf373147d59c33068e64353cf629ed7671b02a37e938fb6d6036f3185532" HandleID="k8s-pod-network.d7d9bf373147d59c33068e64353cf629ed7671b02a37e938fb6d6036f3185532" Workload="ci--4547.0.0--8--fbad3a37dc-k8s-coredns--66bc5c9577--glpm2-eth0" Dec 16 03:28:36.764751 containerd[1596]: 2025-12-16 03:28:36.609 [INFO][4123] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d7d9bf373147d59c33068e64353cf629ed7671b02a37e938fb6d6036f3185532" HandleID="k8s-pod-network.d7d9bf373147d59c33068e64353cf629ed7671b02a37e938fb6d6036f3185532" Workload="ci--4547.0.0--8--fbad3a37dc-k8s-coredns--66bc5c9577--glpm2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0006102a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547.0.0-8-fbad3a37dc", "pod":"coredns-66bc5c9577-glpm2", 
"timestamp":"2025-12-16 03:28:36.60929361 +0000 UTC"}, Hostname:"ci-4547.0.0-8-fbad3a37dc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:28:36.764751 containerd[1596]: 2025-12-16 03:28:36.610 [INFO][4123] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:28:36.764751 containerd[1596]: 2025-12-16 03:28:36.610 [INFO][4123] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 03:28:36.764751 containerd[1596]: 2025-12-16 03:28:36.610 [INFO][4123] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-8-fbad3a37dc' Dec 16 03:28:36.764751 containerd[1596]: 2025-12-16 03:28:36.632 [INFO][4123] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d7d9bf373147d59c33068e64353cf629ed7671b02a37e938fb6d6036f3185532" host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:36.764751 containerd[1596]: 2025-12-16 03:28:36.642 [INFO][4123] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:36.764751 containerd[1596]: 2025-12-16 03:28:36.653 [INFO][4123] ipam/ipam.go 511: Trying affinity for 192.168.122.192/26 host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:36.764751 containerd[1596]: 2025-12-16 03:28:36.657 [INFO][4123] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.192/26 host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:36.764751 containerd[1596]: 2025-12-16 03:28:36.663 [INFO][4123] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:36.767388 containerd[1596]: 2025-12-16 03:28:36.664 [INFO][4123] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.d7d9bf373147d59c33068e64353cf629ed7671b02a37e938fb6d6036f3185532" host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:36.767388 containerd[1596]: 2025-12-16 03:28:36.669 [INFO][4123] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d7d9bf373147d59c33068e64353cf629ed7671b02a37e938fb6d6036f3185532 Dec 16 03:28:36.767388 containerd[1596]: 2025-12-16 03:28:36.677 [INFO][4123] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.192/26 handle="k8s-pod-network.d7d9bf373147d59c33068e64353cf629ed7671b02a37e938fb6d6036f3185532" host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:36.767388 containerd[1596]: 2025-12-16 03:28:36.687 [INFO][4123] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.122.194/26] block=192.168.122.192/26 handle="k8s-pod-network.d7d9bf373147d59c33068e64353cf629ed7671b02a37e938fb6d6036f3185532" host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:36.767388 containerd[1596]: 2025-12-16 03:28:36.688 [INFO][4123] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.194/26] handle="k8s-pod-network.d7d9bf373147d59c33068e64353cf629ed7671b02a37e938fb6d6036f3185532" host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:36.767388 containerd[1596]: 2025-12-16 03:28:36.688 [INFO][4123] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 03:28:36.767388 containerd[1596]: 2025-12-16 03:28:36.688 [INFO][4123] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.194/26] IPv6=[] ContainerID="d7d9bf373147d59c33068e64353cf629ed7671b02a37e938fb6d6036f3185532" HandleID="k8s-pod-network.d7d9bf373147d59c33068e64353cf629ed7671b02a37e938fb6d6036f3185532" Workload="ci--4547.0.0--8--fbad3a37dc-k8s-coredns--66bc5c9577--glpm2-eth0" Dec 16 03:28:36.770463 containerd[1596]: 2025-12-16 03:28:36.696 [INFO][4110] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d7d9bf373147d59c33068e64353cf629ed7671b02a37e938fb6d6036f3185532" Namespace="kube-system" Pod="coredns-66bc5c9577-glpm2" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-coredns--66bc5c9577--glpm2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--8--fbad3a37dc-k8s-coredns--66bc5c9577--glpm2-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"4c0d8b2a-7296-491d-a267-2b8116b5d172", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 27, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-8-fbad3a37dc", ContainerID:"", Pod:"coredns-66bc5c9577-glpm2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e2a1909872", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:28:36.770463 containerd[1596]: 2025-12-16 03:28:36.696 [INFO][4110] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.194/32] ContainerID="d7d9bf373147d59c33068e64353cf629ed7671b02a37e938fb6d6036f3185532" Namespace="kube-system" Pod="coredns-66bc5c9577-glpm2" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-coredns--66bc5c9577--glpm2-eth0" Dec 16 03:28:36.770463 containerd[1596]: 2025-12-16 03:28:36.696 [INFO][4110] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e2a1909872 ContainerID="d7d9bf373147d59c33068e64353cf629ed7671b02a37e938fb6d6036f3185532" Namespace="kube-system" Pod="coredns-66bc5c9577-glpm2" 
WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-coredns--66bc5c9577--glpm2-eth0" Dec 16 03:28:36.770463 containerd[1596]: 2025-12-16 03:28:36.721 [INFO][4110] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d7d9bf373147d59c33068e64353cf629ed7671b02a37e938fb6d6036f3185532" Namespace="kube-system" Pod="coredns-66bc5c9577-glpm2" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-coredns--66bc5c9577--glpm2-eth0" Dec 16 03:28:36.770463 containerd[1596]: 2025-12-16 03:28:36.724 [INFO][4110] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d7d9bf373147d59c33068e64353cf629ed7671b02a37e938fb6d6036f3185532" Namespace="kube-system" Pod="coredns-66bc5c9577-glpm2" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-coredns--66bc5c9577--glpm2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--8--fbad3a37dc-k8s-coredns--66bc5c9577--glpm2-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"4c0d8b2a-7296-491d-a267-2b8116b5d172", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 27, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-8-fbad3a37dc", ContainerID:"d7d9bf373147d59c33068e64353cf629ed7671b02a37e938fb6d6036f3185532", Pod:"coredns-66bc5c9577-glpm2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e2a1909872", MAC:"52:a6:b3:bf:fc:a6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:28:36.770762 containerd[1596]: 2025-12-16 03:28:36.754 [INFO][4110] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d7d9bf373147d59c33068e64353cf629ed7671b02a37e938fb6d6036f3185532" Namespace="kube-system" Pod="coredns-66bc5c9577-glpm2" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-coredns--66bc5c9577--glpm2-eth0" Dec 16 03:28:36.855753 containerd[1596]: time="2025-12-16T03:28:36.855690476Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:28:36.860226 containerd[1596]: 
time="2025-12-16T03:28:36.860133889Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 03:28:36.860801 containerd[1596]: time="2025-12-16T03:28:36.860554861Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 03:28:36.861581 kubelet[2782]: E1216 03:28:36.861360 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:28:36.861581 kubelet[2782]: E1216 03:28:36.861428 2782 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:28:36.861581 kubelet[2782]: E1216 03:28:36.861521 2782 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-64588b6f98-lngtn_calico-system(c131b783-1bf8-4038-b6d0-3485a4a710ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 03:28:36.862745 kubelet[2782]: E1216 03:28:36.861568 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64588b6f98-lngtn" podUID="c131b783-1bf8-4038-b6d0-3485a4a710ea" Dec 16 03:28:36.864384 containerd[1596]: time="2025-12-16T03:28:36.864303612Z" level=info msg="connecting to shim d7d9bf373147d59c33068e64353cf629ed7671b02a37e938fb6d6036f3185532" address="unix:///run/containerd/s/dd1ede62a8c49cfe369a3996976e994a2e836d53619a29f7781fc6e808a4fcb7" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:28:36.940866 systemd[1]: Started cri-containerd-d7d9bf373147d59c33068e64353cf629ed7671b02a37e938fb6d6036f3185532.scope - libcontainer container d7d9bf373147d59c33068e64353cf629ed7671b02a37e938fb6d6036f3185532. 
Dec 16 03:28:36.977000 audit: BPF prog-id=180 op=LOAD Dec 16 03:28:36.981000 audit: BPF prog-id=181 op=LOAD Dec 16 03:28:36.981000 audit[4160]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000206238 a2=98 a3=0 items=0 ppid=4150 pid=4160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:36.981000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437643962663337333134376435396333333036386536343335336366 Dec 16 03:28:36.981000 audit: BPF prog-id=181 op=UNLOAD Dec 16 03:28:36.981000 audit[4160]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4150 pid=4160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:36.981000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437643962663337333134376435396333333036386536343335336366 Dec 16 03:28:36.981000 audit: BPF prog-id=182 op=LOAD Dec 16 03:28:36.981000 audit[4160]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000206488 a2=98 a3=0 items=0 ppid=4150 pid=4160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:36.981000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437643962663337333134376435396333333036386536343335336366 Dec 16 03:28:36.982000 audit: BPF prog-id=183 op=LOAD Dec 16 03:28:36.982000 audit[4160]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000206218 a2=98 a3=0 items=0 ppid=4150 pid=4160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:36.982000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437643962663337333134376435396333333036386536343335336366 Dec 16 03:28:36.982000 audit: BPF prog-id=183 op=UNLOAD Dec 16 03:28:36.982000 audit[4160]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4150 pid=4160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:36.982000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437643962663337333134376435396333333036386536343335336366 Dec 16 03:28:36.982000 audit: BPF prog-id=182 op=UNLOAD Dec 16 03:28:36.982000 audit[4160]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4150 pid=4160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:36.982000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437643962663337333134376435396333333036386536343335336366 Dec 16 03:28:36.982000 audit: BPF prog-id=184 op=LOAD Dec 16 03:28:36.982000 audit[4160]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002066e8 a2=98 a3=0 items=0 ppid=4150 pid=4160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:36.982000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437643962663337333134376435396333333036386536343335336366 Dec 16 03:28:37.075655 containerd[1596]: time="2025-12-16T03:28:37.075599463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-glpm2,Uid:4c0d8b2a-7296-491d-a267-2b8116b5d172,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7d9bf373147d59c33068e64353cf629ed7671b02a37e938fb6d6036f3185532\"" Dec 16 03:28:37.080105 kubelet[2782]: E1216 03:28:37.079798 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:28:37.106000 audit: BPF prog-id=185 op=LOAD Dec 16 03:28:37.106000 audit[4212]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff7673950 a2=98 a3=1fffffffffffffff items=0 ppid=4042 pid=4212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:37.106000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:28:37.108000 audit: BPF prog-id=185 op=UNLOAD Dec 16 03:28:37.108000 audit[4212]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffff7673920 a3=0 items=0 ppid=4042 pid=4212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:37.108000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:28:37.109000 audit: BPF prog-id=186 op=LOAD Dec 16 03:28:37.109000 audit[4212]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff7673830 a2=94 a3=3 items=0 ppid=4042 pid=4212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:37.109000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:28:37.110000 audit: BPF prog-id=186 op=UNLOAD Dec 16 03:28:37.110000 audit[4212]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffff7673830 a2=94 a3=3 items=0 ppid=4042 pid=4212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:37.110000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:28:37.110000 audit: BPF prog-id=187 op=LOAD Dec 16 03:28:37.110000 audit[4212]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff7673870 a2=94 a3=7ffff7673a50 items=0 ppid=4042 pid=4212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:37.110000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:28:37.110000 audit: BPF prog-id=187 op=UNLOAD Dec 16 03:28:37.110000 audit[4212]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffff7673870 a2=94 a3=7ffff7673a50 items=0 ppid=4042 pid=4212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:37.110000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:28:37.116000 audit: BPF prog-id=188 op=LOAD Dec 16 03:28:37.116000 audit[4215]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcb5780700 a2=98 a3=3 items=0 ppid=4042 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:37.116000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:28:37.116000 audit: BPF prog-id=188 op=UNLOAD Dec 16 03:28:37.116000 audit[4215]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffcb57806d0 a3=0 items=0 ppid=4042 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:37.116000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:28:37.117000 audit: BPF prog-id=189 op=LOAD Dec 16 03:28:37.117000 audit[4215]: SYSCALL arch=c000003e syscall=321 success=yes 
exit=4 a0=5 a1=7ffcb57804f0 a2=94 a3=54428f items=0 ppid=4042 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:37.117000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:28:37.118000 audit: BPF prog-id=189 op=UNLOAD Dec 16 03:28:37.118000 audit[4215]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcb57804f0 a2=94 a3=54428f items=0 ppid=4042 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:37.118000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:28:37.118000 audit: BPF prog-id=190 op=LOAD Dec 16 03:28:37.118000 audit[4215]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcb5780520 a2=94 a3=2 items=0 ppid=4042 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:37.118000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:28:37.118000 audit: BPF prog-id=190 op=UNLOAD Dec 16 03:28:37.118000 audit[4215]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcb5780520 a2=0 a3=2 items=0 ppid=4042 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:37.118000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:28:37.124369 containerd[1596]: time="2025-12-16T03:28:37.124313062Z" level=info msg="CreateContainer within sandbox \"d7d9bf373147d59c33068e64353cf629ed7671b02a37e938fb6d6036f3185532\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 03:28:37.155209 containerd[1596]: time="2025-12-16T03:28:37.153627542Z" level=info msg="Container ed69ca609be780c9314db5180816122116f364a91bd678133a7d5f41ec60e1c8: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:28:37.165831 containerd[1596]: time="2025-12-16T03:28:37.165605343Z" level=info msg="CreateContainer within sandbox \"d7d9bf373147d59c33068e64353cf629ed7671b02a37e938fb6d6036f3185532\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ed69ca609be780c9314db5180816122116f364a91bd678133a7d5f41ec60e1c8\"" Dec 16 03:28:37.167500 containerd[1596]: time="2025-12-16T03:28:37.167454547Z" level=info msg="StartContainer for \"ed69ca609be780c9314db5180816122116f364a91bd678133a7d5f41ec60e1c8\"" Dec 16 03:28:37.170198 containerd[1596]: time="2025-12-16T03:28:37.170153403Z" level=info msg="connecting to shim ed69ca609be780c9314db5180816122116f364a91bd678133a7d5f41ec60e1c8" address="unix:///run/containerd/s/dd1ede62a8c49cfe369a3996976e994a2e836d53619a29f7781fc6e808a4fcb7" protocol=ttrpc version=3 Dec 16 03:28:37.206698 systemd[1]: Started cri-containerd-ed69ca609be780c9314db5180816122116f364a91bd678133a7d5f41ec60e1c8.scope - libcontainer container ed69ca609be780c9314db5180816122116f364a91bd678133a7d5f41ec60e1c8. 
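
In the WorkloadEndpoint dumps above, the coredns ports appear twice: as plain numbers in the CNI "found existing endpoint" line (dns UDP 53, dns-tcp TCP 53, metrics TCP 9153, liveness-probe TCP 8080, readiness-probe TCP 8181) and as hex Port fields in the Go struct dump (0x35, 0x23c1, 0x1f90, 0x1ff5). A one-liner confirming the two encodings agree:

    # Hex Port values copied from the v3.WorkloadEndpointPort struct dump above;
    # printing the dict shows them in decimal.
    ports = {"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1,
             "liveness-probe": 0x1f90, "readiness-probe": 0x1ff5}
    print(ports)
    # {'dns': 53, 'dns-tcp': 53, 'metrics': 9153, 'liveness-probe': 8080, 'readiness-probe': 8181}
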
Dec 16 03:28:37.236000 audit: BPF prog-id=191 op=LOAD Dec 16 03:28:37.238000 audit: BPF prog-id=192 op=LOAD Dec 16 03:28:37.238000 audit[4216]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4150 pid=4216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:37.238000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564363963613630396265373830633933313464623531383038313631 Dec 16 03:28:37.238000 audit: BPF prog-id=192 op=UNLOAD Dec 16 03:28:37.238000 audit[4216]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4150 pid=4216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:37.238000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564363963613630396265373830633933313464623531383038313631 Dec 16 03:28:37.239000 audit: BPF prog-id=193 op=LOAD Dec 16 03:28:37.239000 audit[4216]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4150 pid=4216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:37.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564363963613630396265373830633933313464623531383038313631 Dec 16 03:28:37.239000 audit: BPF prog-id=194 op=LOAD Dec 16 03:28:37.239000 audit[4216]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4150 pid=4216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:37.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564363963613630396265373830633933313464623531383038313631 Dec 16 03:28:37.239000 audit: BPF prog-id=194 op=UNLOAD Dec 16 03:28:37.239000 audit[4216]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4150 pid=4216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:37.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564363963613630396265373830633933313464623531383038313631 Dec 16 03:28:37.239000 audit: BPF prog-id=193 op=UNLOAD Dec 16 03:28:37.239000 audit[4216]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4150 pid=4216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:37.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564363963613630396265373830633933313464623531383038313631 Dec 16 03:28:37.239000 audit: BPF prog-id=195 op=LOAD Dec 16 03:28:37.239000 audit[4216]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4150 pid=4216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:37.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564363963613630396265373830633933313464623531383038313631 Dec 16 03:28:37.275646 containerd[1596]: time="2025-12-16T03:28:37.275505090Z" level=info msg="StartContainer for \"ed69ca609be780c9314db5180816122116f364a91bd678133a7d5f41ec60e1c8\" returns successfully" Dec 16 03:28:37.403369 containerd[1596]: time="2025-12-16T03:28:37.403286513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d6f5f89f8-mz87q,Uid:34973ba8-5ee2-4cc3-a8d0-65270e641be0,Namespace:calico-system,Attempt:0,}" Dec 16 03:28:37.404990 containerd[1596]: time="2025-12-16T03:28:37.404886824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699cff98b-z4b6s,Uid:fb012d08-2ffd-46d4-bf1b-e4743471acfd,Namespace:calico-apiserver,Attempt:0,}" Dec 16 03:28:37.575899 systemd-networkd[1494]: cali909dd78a447: Gained IPv6LL Dec 16 03:28:37.644000 audit: BPF prog-id=196 op=LOAD Dec 16 03:28:37.644000 audit[4215]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcb57803e0 a2=94 a3=1 items=0 ppid=4042 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:37.644000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:28:37.645000 audit: BPF prog-id=196 op=UNLOAD Dec 16 03:28:37.645000 audit[4215]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcb57803e0 a2=94 a3=1 items=0 ppid=4042 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:37.645000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:28:37.674000 audit: BPF prog-id=197 op=LOAD Dec 16 03:28:37.674000 audit[4215]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcb57803d0 a2=94 a3=4 items=0 ppid=4042 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:37.674000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:28:37.674000 audit: BPF prog-id=197 op=UNLOAD Dec 16 
03:28:37.674000 audit[4215]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffcb57803d0 a2=0 a3=4 items=0 ppid=4042 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:37.674000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:28:37.675000 audit: BPF prog-id=198 op=LOAD Dec 16 03:28:37.675000 audit[4215]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcb5780230 a2=94 a3=5 items=0 ppid=4042 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:37.675000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:28:37.675000 audit: BPF prog-id=198 op=UNLOAD Dec 16 03:28:37.675000 audit[4215]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffcb5780230 a2=0 a3=5 items=0 ppid=4042 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:37.675000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:28:37.676000 audit: BPF prog-id=199 op=LOAD Dec 16 03:28:37.676000 audit[4215]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcb5780450 a2=94 a3=6 items=0 ppid=4042 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:37.676000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:28:37.676000 audit: BPF prog-id=199 op=UNLOAD Dec 16 03:28:37.676000 audit[4215]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffcb5780450 a2=0 a3=6 items=0 ppid=4042 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:37.676000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:28:37.676000 audit: BPF prog-id=200 op=LOAD Dec 16 03:28:37.676000 audit[4215]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcb577fc00 a2=94 a3=88 items=0 ppid=4042 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:37.676000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:28:37.677000 audit: BPF prog-id=201 op=LOAD Dec 16 03:28:37.677000 audit[4215]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffcb577fa80 a2=94 a3=2 items=0 ppid=4042 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:37.677000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:28:37.677000 audit: BPF prog-id=201 op=UNLOAD Dec 16 03:28:37.677000 audit[4215]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffcb577fab0 a2=0 a3=7ffcb577fbb0 items=0 ppid=4042 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:37.677000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:28:37.677000 audit: BPF prog-id=200 op=UNLOAD Dec 16 03:28:37.677000 audit[4215]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=1a970d10 a2=0 a3=caee354c5dd1edc7 items=0 ppid=4042 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:37.677000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:28:37.699000 audit: BPF prog-id=202 op=LOAD Dec 16 03:28:37.699000 audit[4290]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe486e3130 a2=98 a3=1999999999999999 items=0 ppid=4042 pid=4290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:37.699000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:28:37.699000 audit: BPF prog-id=202 op=UNLOAD Dec 16 03:28:37.699000 audit[4290]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe486e3100 a3=0 items=0 ppid=4042 pid=4290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:37.699000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:28:37.699000 audit: BPF prog-id=203 op=LOAD Dec 16 03:28:37.699000 audit[4290]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe486e3010 a2=94 a3=ffff items=0 ppid=4042 pid=4290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:37.699000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:28:37.699000 audit: BPF prog-id=203 op=UNLOAD Dec 16 03:28:37.699000 audit[4290]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe486e3010 a2=94 a3=ffff items=0 ppid=4042 pid=4290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:37.699000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:28:37.699000 audit: BPF 
prog-id=204 op=LOAD Dec 16 03:28:37.699000 audit[4290]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe486e3050 a2=94 a3=7ffe486e3230 items=0 ppid=4042 pid=4290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:37.699000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:28:37.699000 audit: BPF prog-id=204 op=UNLOAD Dec 16 03:28:37.699000 audit[4290]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe486e3050 a2=94 a3=7ffe486e3230 items=0 ppid=4042 pid=4290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:37.699000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:28:37.728598 systemd-networkd[1494]: cali1b6f8bf0bbd: Link UP Dec 16 03:28:37.740541 systemd-networkd[1494]: cali1b6f8bf0bbd: Gained carrier Dec 16 03:28:37.768607 kubelet[2782]: E1216 03:28:37.768520 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:28:37.811145 containerd[1596]: 2025-12-16 03:28:37.510 [INFO][4248] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--8--fbad3a37dc-k8s-calico--apiserver--699cff98b--z4b6s-eth0 calico-apiserver-699cff98b- calico-apiserver fb012d08-2ffd-46d4-bf1b-e4743471acfd 849 0 2025-12-16 03:28:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:699cff98b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547.0.0-8-fbad3a37dc calico-apiserver-699cff98b-z4b6s eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1b6f8bf0bbd [] [] }} ContainerID="384d9d59cb89e7428f03725af23eb94ef5907fcd085f63085de543ae5fd67715" Namespace="calico-apiserver" Pod="calico-apiserver-699cff98b-z4b6s" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-calico--apiserver--699cff98b--z4b6s-" Dec 16 03:28:37.811145 containerd[1596]: 2025-12-16 03:28:37.510 [INFO][4248] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="384d9d59cb89e7428f03725af23eb94ef5907fcd085f63085de543ae5fd67715" Namespace="calico-apiserver" Pod="calico-apiserver-699cff98b-z4b6s" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-calico--apiserver--699cff98b--z4b6s-eth0" Dec 16 03:28:37.811145 containerd[1596]: 2025-12-16 03:28:37.627 [INFO][4275] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="384d9d59cb89e7428f03725af23eb94ef5907fcd085f63085de543ae5fd67715" HandleID="k8s-pod-network.384d9d59cb89e7428f03725af23eb94ef5907fcd085f63085de543ae5fd67715" 
Workload="ci--4547.0.0--8--fbad3a37dc-k8s-calico--apiserver--699cff98b--z4b6s-eth0" Dec 16 03:28:37.811145 containerd[1596]: 2025-12-16 03:28:37.628 [INFO][4275] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="384d9d59cb89e7428f03725af23eb94ef5907fcd085f63085de543ae5fd67715" HandleID="k8s-pod-network.384d9d59cb89e7428f03725af23eb94ef5907fcd085f63085de543ae5fd67715" Workload="ci--4547.0.0--8--fbad3a37dc-k8s-calico--apiserver--699cff98b--z4b6s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e0d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547.0.0-8-fbad3a37dc", "pod":"calico-apiserver-699cff98b-z4b6s", "timestamp":"2025-12-16 03:28:37.627569949 +0000 UTC"}, Hostname:"ci-4547.0.0-8-fbad3a37dc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:28:37.811145 containerd[1596]: 2025-12-16 03:28:37.628 [INFO][4275] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:28:37.811145 containerd[1596]: 2025-12-16 03:28:37.628 [INFO][4275] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 03:28:37.811145 containerd[1596]: 2025-12-16 03:28:37.628 [INFO][4275] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-8-fbad3a37dc' Dec 16 03:28:37.811145 containerd[1596]: 2025-12-16 03:28:37.649 [INFO][4275] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.384d9d59cb89e7428f03725af23eb94ef5907fcd085f63085de543ae5fd67715" host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:37.811145 containerd[1596]: 2025-12-16 03:28:37.660 [INFO][4275] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:37.811145 containerd[1596]: 2025-12-16 03:28:37.671 [INFO][4275] ipam/ipam.go 511: Trying affinity for 192.168.122.192/26 host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:37.811145 containerd[1596]: 2025-12-16 03:28:37.677 [INFO][4275] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.192/26 host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:37.811145 containerd[1596]: 2025-12-16 03:28:37.683 [INFO][4275] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:37.811145 containerd[1596]: 2025-12-16 03:28:37.683 [INFO][4275] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.384d9d59cb89e7428f03725af23eb94ef5907fcd085f63085de543ae5fd67715" host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:37.811145 containerd[1596]: 2025-12-16 03:28:37.686 [INFO][4275] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.384d9d59cb89e7428f03725af23eb94ef5907fcd085f63085de543ae5fd67715 Dec 16 03:28:37.811145 containerd[1596]: 2025-12-16 03:28:37.694 [INFO][4275] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.192/26 handle="k8s-pod-network.384d9d59cb89e7428f03725af23eb94ef5907fcd085f63085de543ae5fd67715" host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:37.811145 containerd[1596]: 2025-12-16 03:28:37.704 [INFO][4275] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.122.195/26] block=192.168.122.192/26 handle="k8s-pod-network.384d9d59cb89e7428f03725af23eb94ef5907fcd085f63085de543ae5fd67715" host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:37.811145 containerd[1596]: 2025-12-16 03:28:37.704 [INFO][4275] ipam/ipam.go 878: Auto-assigned 1 out of 1 
IPv4s: [192.168.122.195/26] handle="k8s-pod-network.384d9d59cb89e7428f03725af23eb94ef5907fcd085f63085de543ae5fd67715" host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:37.811145 containerd[1596]: 2025-12-16 03:28:37.704 [INFO][4275] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 03:28:37.811145 containerd[1596]: 2025-12-16 03:28:37.704 [INFO][4275] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.195/26] IPv6=[] ContainerID="384d9d59cb89e7428f03725af23eb94ef5907fcd085f63085de543ae5fd67715" HandleID="k8s-pod-network.384d9d59cb89e7428f03725af23eb94ef5907fcd085f63085de543ae5fd67715" Workload="ci--4547.0.0--8--fbad3a37dc-k8s-calico--apiserver--699cff98b--z4b6s-eth0" Dec 16 03:28:37.822960 containerd[1596]: 2025-12-16 03:28:37.711 [INFO][4248] cni-plugin/k8s.go 418: Populated endpoint ContainerID="384d9d59cb89e7428f03725af23eb94ef5907fcd085f63085de543ae5fd67715" Namespace="calico-apiserver" Pod="calico-apiserver-699cff98b-z4b6s" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-calico--apiserver--699cff98b--z4b6s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--8--fbad3a37dc-k8s-calico--apiserver--699cff98b--z4b6s-eth0", GenerateName:"calico-apiserver-699cff98b-", Namespace:"calico-apiserver", SelfLink:"", UID:"fb012d08-2ffd-46d4-bf1b-e4743471acfd", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 28, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"699cff98b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-8-fbad3a37dc", ContainerID:"", Pod:"calico-apiserver-699cff98b-z4b6s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1b6f8bf0bbd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:28:37.822960 containerd[1596]: 2025-12-16 03:28:37.712 [INFO][4248] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.195/32] ContainerID="384d9d59cb89e7428f03725af23eb94ef5907fcd085f63085de543ae5fd67715" Namespace="calico-apiserver" Pod="calico-apiserver-699cff98b-z4b6s" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-calico--apiserver--699cff98b--z4b6s-eth0" Dec 16 03:28:37.822960 containerd[1596]: 2025-12-16 03:28:37.712 [INFO][4248] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1b6f8bf0bbd ContainerID="384d9d59cb89e7428f03725af23eb94ef5907fcd085f63085de543ae5fd67715" Namespace="calico-apiserver" Pod="calico-apiserver-699cff98b-z4b6s" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-calico--apiserver--699cff98b--z4b6s-eth0" Dec 16 03:28:37.822960 containerd[1596]: 2025-12-16 03:28:37.751 [INFO][4248] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="384d9d59cb89e7428f03725af23eb94ef5907fcd085f63085de543ae5fd67715" Namespace="calico-apiserver" Pod="calico-apiserver-699cff98b-z4b6s" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-calico--apiserver--699cff98b--z4b6s-eth0" Dec 16 03:28:37.822960 containerd[1596]: 2025-12-16 03:28:37.755 [INFO][4248] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="384d9d59cb89e7428f03725af23eb94ef5907fcd085f63085de543ae5fd67715" Namespace="calico-apiserver" Pod="calico-apiserver-699cff98b-z4b6s" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-calico--apiserver--699cff98b--z4b6s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--8--fbad3a37dc-k8s-calico--apiserver--699cff98b--z4b6s-eth0", GenerateName:"calico-apiserver-699cff98b-", Namespace:"calico-apiserver", SelfLink:"", UID:"fb012d08-2ffd-46d4-bf1b-e4743471acfd", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 28, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"699cff98b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-8-fbad3a37dc", ContainerID:"384d9d59cb89e7428f03725af23eb94ef5907fcd085f63085de543ae5fd67715", Pod:"calico-apiserver-699cff98b-z4b6s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1b6f8bf0bbd", MAC:"fe:52:92:5a:d4:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:28:37.822960 containerd[1596]: 2025-12-16 03:28:37.796 [INFO][4248] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="384d9d59cb89e7428f03725af23eb94ef5907fcd085f63085de543ae5fd67715" Namespace="calico-apiserver" Pod="calico-apiserver-699cff98b-z4b6s" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-calico--apiserver--699cff98b--z4b6s-eth0" Dec 16 03:28:37.854698 kubelet[2782]: E1216 03:28:37.853854 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64588b6f98-lngtn" 
podUID="c131b783-1bf8-4038-b6d0-3485a4a710ea" Dec 16 03:28:37.952976 systemd-networkd[1494]: vxlan.calico: Link UP Dec 16 03:28:37.952988 systemd-networkd[1494]: vxlan.calico: Gained carrier Dec 16 03:28:37.953197 systemd-networkd[1494]: cali6c23ab006a6: Link UP Dec 16 03:28:37.953388 systemd-networkd[1494]: cali6c23ab006a6: Gained carrier Dec 16 03:28:38.007870 containerd[1596]: 2025-12-16 03:28:37.510 [INFO][4246] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--8--fbad3a37dc-k8s-calico--kube--controllers--d6f5f89f8--mz87q-eth0 calico-kube-controllers-d6f5f89f8- calico-system 34973ba8-5ee2-4cc3-a8d0-65270e641be0 851 0 2025-12-16 03:28:13 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:d6f5f89f8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4547.0.0-8-fbad3a37dc calico-kube-controllers-d6f5f89f8-mz87q eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6c23ab006a6 [] [] }} ContainerID="c466ec50e6b380ca21d2885412dc606d207389b26a037e9ecdea58541e4ea407" Namespace="calico-system" Pod="calico-kube-controllers-d6f5f89f8-mz87q" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-calico--kube--controllers--d6f5f89f8--mz87q-" Dec 16 03:28:38.007870 containerd[1596]: 2025-12-16 03:28:37.510 [INFO][4246] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c466ec50e6b380ca21d2885412dc606d207389b26a037e9ecdea58541e4ea407" Namespace="calico-system" Pod="calico-kube-controllers-d6f5f89f8-mz87q" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-calico--kube--controllers--d6f5f89f8--mz87q-eth0" Dec 16 03:28:38.007870 containerd[1596]: 2025-12-16 03:28:37.661 [INFO][4280] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c466ec50e6b380ca21d2885412dc606d207389b26a037e9ecdea58541e4ea407" HandleID="k8s-pod-network.c466ec50e6b380ca21d2885412dc606d207389b26a037e9ecdea58541e4ea407" Workload="ci--4547.0.0--8--fbad3a37dc-k8s-calico--kube--controllers--d6f5f89f8--mz87q-eth0" Dec 16 03:28:38.007870 containerd[1596]: 2025-12-16 03:28:37.663 [INFO][4280] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c466ec50e6b380ca21d2885412dc606d207389b26a037e9ecdea58541e4ea407" HandleID="k8s-pod-network.c466ec50e6b380ca21d2885412dc606d207389b26a037e9ecdea58541e4ea407" Workload="ci--4547.0.0--8--fbad3a37dc-k8s-calico--kube--controllers--d6f5f89f8--mz87q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5660), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-8-fbad3a37dc", "pod":"calico-kube-controllers-d6f5f89f8-mz87q", "timestamp":"2025-12-16 03:28:37.661854833 +0000 UTC"}, Hostname:"ci-4547.0.0-8-fbad3a37dc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:28:38.007870 containerd[1596]: 2025-12-16 03:28:37.664 [INFO][4280] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:28:38.007870 containerd[1596]: 2025-12-16 03:28:37.705 [INFO][4280] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 03:28:38.007870 containerd[1596]: 2025-12-16 03:28:37.705 [INFO][4280] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-8-fbad3a37dc' Dec 16 03:28:38.007870 containerd[1596]: 2025-12-16 03:28:37.751 [INFO][4280] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c466ec50e6b380ca21d2885412dc606d207389b26a037e9ecdea58541e4ea407" host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:38.007870 containerd[1596]: 2025-12-16 03:28:37.794 [INFO][4280] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:38.007870 containerd[1596]: 2025-12-16 03:28:37.832 [INFO][4280] ipam/ipam.go 511: Trying affinity for 192.168.122.192/26 host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:38.007870 containerd[1596]: 2025-12-16 03:28:37.841 [INFO][4280] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.192/26 host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:38.007870 containerd[1596]: 2025-12-16 03:28:37.848 [INFO][4280] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:38.007870 containerd[1596]: 2025-12-16 03:28:37.851 [INFO][4280] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.c466ec50e6b380ca21d2885412dc606d207389b26a037e9ecdea58541e4ea407" host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:38.007870 containerd[1596]: 2025-12-16 03:28:37.865 [INFO][4280] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c466ec50e6b380ca21d2885412dc606d207389b26a037e9ecdea58541e4ea407 Dec 16 03:28:38.007870 containerd[1596]: 2025-12-16 03:28:37.878 [INFO][4280] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.192/26 handle="k8s-pod-network.c466ec50e6b380ca21d2885412dc606d207389b26a037e9ecdea58541e4ea407" host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:38.007870 containerd[1596]: 2025-12-16 03:28:37.894 [INFO][4280] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.122.196/26] block=192.168.122.192/26 handle="k8s-pod-network.c466ec50e6b380ca21d2885412dc606d207389b26a037e9ecdea58541e4ea407" host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:38.007870 containerd[1596]: 2025-12-16 03:28:37.897 [INFO][4280] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.196/26] handle="k8s-pod-network.c466ec50e6b380ca21d2885412dc606d207389b26a037e9ecdea58541e4ea407" host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:38.007870 containerd[1596]: 2025-12-16 03:28:37.897 [INFO][4280] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 03:28:38.007870 containerd[1596]: 2025-12-16 03:28:37.897 [INFO][4280] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.196/26] IPv6=[] ContainerID="c466ec50e6b380ca21d2885412dc606d207389b26a037e9ecdea58541e4ea407" HandleID="k8s-pod-network.c466ec50e6b380ca21d2885412dc606d207389b26a037e9ecdea58541e4ea407" Workload="ci--4547.0.0--8--fbad3a37dc-k8s-calico--kube--controllers--d6f5f89f8--mz87q-eth0" Dec 16 03:28:38.008995 containerd[1596]: 2025-12-16 03:28:37.936 [INFO][4246] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c466ec50e6b380ca21d2885412dc606d207389b26a037e9ecdea58541e4ea407" Namespace="calico-system" Pod="calico-kube-controllers-d6f5f89f8-mz87q" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-calico--kube--controllers--d6f5f89f8--mz87q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--8--fbad3a37dc-k8s-calico--kube--controllers--d6f5f89f8--mz87q-eth0", GenerateName:"calico-kube-controllers-d6f5f89f8-", Namespace:"calico-system", SelfLink:"", UID:"34973ba8-5ee2-4cc3-a8d0-65270e641be0", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 28, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d6f5f89f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-8-fbad3a37dc", ContainerID:"", Pod:"calico-kube-controllers-d6f5f89f8-mz87q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.122.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6c23ab006a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:28:38.008995 containerd[1596]: 2025-12-16 03:28:37.936 [INFO][4246] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.196/32] ContainerID="c466ec50e6b380ca21d2885412dc606d207389b26a037e9ecdea58541e4ea407" Namespace="calico-system" Pod="calico-kube-controllers-d6f5f89f8-mz87q" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-calico--kube--controllers--d6f5f89f8--mz87q-eth0" Dec 16 03:28:38.008995 containerd[1596]: 2025-12-16 03:28:37.939 [INFO][4246] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6c23ab006a6 ContainerID="c466ec50e6b380ca21d2885412dc606d207389b26a037e9ecdea58541e4ea407" Namespace="calico-system" Pod="calico-kube-controllers-d6f5f89f8-mz87q" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-calico--kube--controllers--d6f5f89f8--mz87q-eth0" Dec 16 03:28:38.008995 containerd[1596]: 2025-12-16 03:28:37.948 [INFO][4246] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c466ec50e6b380ca21d2885412dc606d207389b26a037e9ecdea58541e4ea407" Namespace="calico-system" Pod="calico-kube-controllers-d6f5f89f8-mz87q" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-calico--kube--controllers--d6f5f89f8--mz87q-eth0" 
Dec 16 03:28:38.008995 containerd[1596]: 2025-12-16 03:28:37.949 [INFO][4246] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c466ec50e6b380ca21d2885412dc606d207389b26a037e9ecdea58541e4ea407" Namespace="calico-system" Pod="calico-kube-controllers-d6f5f89f8-mz87q" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-calico--kube--controllers--d6f5f89f8--mz87q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--8--fbad3a37dc-k8s-calico--kube--controllers--d6f5f89f8--mz87q-eth0", GenerateName:"calico-kube-controllers-d6f5f89f8-", Namespace:"calico-system", SelfLink:"", UID:"34973ba8-5ee2-4cc3-a8d0-65270e641be0", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 28, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d6f5f89f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-8-fbad3a37dc", ContainerID:"c466ec50e6b380ca21d2885412dc606d207389b26a037e9ecdea58541e4ea407", Pod:"calico-kube-controllers-d6f5f89f8-mz87q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.122.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6c23ab006a6", MAC:"2e:e1:0f:5b:29:78", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:28:38.008995 containerd[1596]: 2025-12-16 03:28:37.984 [INFO][4246] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c466ec50e6b380ca21d2885412dc606d207389b26a037e9ecdea58541e4ea407" Namespace="calico-system" Pod="calico-kube-controllers-d6f5f89f8-mz87q" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-calico--kube--controllers--d6f5f89f8--mz87q-eth0" Dec 16 03:28:38.019688 containerd[1596]: time="2025-12-16T03:28:38.019535504Z" level=info msg="connecting to shim 384d9d59cb89e7428f03725af23eb94ef5907fcd085f63085de543ae5fd67715" address="unix:///run/containerd/s/776b157b34ae42cd3fd2dd0cced5af33ef46ccd1ea474ea4d35e27a967b6d53d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:28:38.047580 kubelet[2782]: I1216 03:28:38.047468 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-glpm2" podStartSLOduration=40.045028186 podStartE2EDuration="40.045028186s" podCreationTimestamp="2025-12-16 03:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:28:37.993444466 +0000 UTC m=+45.860656852" watchObservedRunningTime="2025-12-16 03:28:38.045028186 +0000 UTC m=+45.912240545" Dec 16 03:28:38.086900 containerd[1596]: time="2025-12-16T03:28:38.086842953Z" level=info msg="connecting to shim c466ec50e6b380ca21d2885412dc606d207389b26a037e9ecdea58541e4ea407" 
address="unix:///run/containerd/s/6e960f6a1446f50620a48799861bd494d411df68da2f0779f15b666883da329d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:28:38.123363 systemd[1]: Started cri-containerd-384d9d59cb89e7428f03725af23eb94ef5907fcd085f63085de543ae5fd67715.scope - libcontainer container 384d9d59cb89e7428f03725af23eb94ef5907fcd085f63085de543ae5fd67715. Dec 16 03:28:38.183074 systemd[1]: Started cri-containerd-c466ec50e6b380ca21d2885412dc606d207389b26a037e9ecdea58541e4ea407.scope - libcontainer container c466ec50e6b380ca21d2885412dc606d207389b26a037e9ecdea58541e4ea407. Dec 16 03:28:38.252000 audit: BPF prog-id=205 op=LOAD Dec 16 03:28:38.253000 audit: BPF prog-id=206 op=LOAD Dec 16 03:28:38.253000 audit[4338]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4322 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.253000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338346439643539636238396537343238663033373235616632336562 Dec 16 03:28:38.253000 audit: BPF prog-id=206 op=UNLOAD Dec 16 03:28:38.253000 audit[4338]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4322 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.253000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338346439643539636238396537343238663033373235616632336562 Dec 16 03:28:38.253000 audit: BPF prog-id=207 op=LOAD Dec 16 03:28:38.253000 audit[4338]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4322 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.253000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338346439643539636238396537343238663033373235616632336562 Dec 16 03:28:38.254000 audit: BPF prog-id=208 op=LOAD Dec 16 03:28:38.254000 audit[4338]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4322 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338346439643539636238396537343238663033373235616632336562 Dec 16 03:28:38.254000 audit: BPF prog-id=208 op=UNLOAD Dec 16 03:28:38.254000 audit[4338]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4322 pid=4338 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338346439643539636238396537343238663033373235616632336562 Dec 16 03:28:38.255000 audit: BPF prog-id=207 op=UNLOAD Dec 16 03:28:38.255000 audit[4338]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4322 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.255000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338346439643539636238396537343238663033373235616632336562 Dec 16 03:28:38.255000 audit: BPF prog-id=209 op=LOAD Dec 16 03:28:38.255000 audit[4338]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4322 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.255000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338346439643539636238396537343238663033373235616632336562 Dec 16 03:28:38.269000 audit[4402]: NETFILTER_CFG table=filter:119 family=2 entries=20 op=nft_register_rule pid=4402 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:28:38.269000 audit[4402]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffedc349530 a2=0 a3=7ffedc34951c items=0 ppid=2935 pid=4402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.269000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:28:38.279000 audit[4402]: NETFILTER_CFG table=nat:120 family=2 entries=14 op=nft_register_rule pid=4402 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:28:38.279000 audit[4402]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffedc349530 a2=0 a3=0 items=0 ppid=2935 pid=4402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.279000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:28:38.306000 audit: BPF prog-id=210 op=LOAD Dec 16 03:28:38.308000 audit: BPF prog-id=211 op=LOAD Dec 16 03:28:38.308000 audit[4371]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4354 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.308000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334363665633530653662333830636132316432383835343132646336 Dec 16 03:28:38.308000 audit: BPF prog-id=211 op=UNLOAD Dec 16 03:28:38.308000 audit[4371]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4354 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.308000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334363665633530653662333830636132316432383835343132646336 Dec 16 03:28:38.309000 audit: BPF prog-id=212 op=LOAD Dec 16 03:28:38.309000 audit[4371]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4354 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.309000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334363665633530653662333830636132316432383835343132646336 Dec 16 03:28:38.309000 audit: BPF prog-id=213 op=LOAD Dec 16 03:28:38.309000 audit[4371]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4354 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.309000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334363665633530653662333830636132316432383835343132646336 Dec 16 03:28:38.310000 audit: BPF prog-id=213 op=UNLOAD Dec 16 03:28:38.310000 audit[4371]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4354 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.310000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334363665633530653662333830636132316432383835343132646336 Dec 16 03:28:38.310000 audit: BPF prog-id=212 op=UNLOAD Dec 16 03:28:38.310000 audit[4371]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4354 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.310000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334363665633530653662333830636132316432383835343132646336 Dec 16 03:28:38.310000 audit: BPF prog-id=214 op=LOAD Dec 16 03:28:38.310000 audit[4371]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4354 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.310000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334363665633530653662333830636132316432383835343132646336 Dec 16 03:28:38.353582 containerd[1596]: time="2025-12-16T03:28:38.353509584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699cff98b-z4b6s,Uid:fb012d08-2ffd-46d4-bf1b-e4743471acfd,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"384d9d59cb89e7428f03725af23eb94ef5907fcd085f63085de543ae5fd67715\"" Dec 16 03:28:38.360182 containerd[1596]: time="2025-12-16T03:28:38.359998172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:28:38.369000 audit: BPF prog-id=215 op=LOAD Dec 16 03:28:38.369000 audit[4415]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc51cf7b30 a2=98 a3=0 items=0 ppid=4042 pid=4415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.369000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:28:38.369000 audit: BPF prog-id=215 op=UNLOAD Dec 16 03:28:38.369000 audit[4415]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc51cf7b00 a3=0 items=0 ppid=4042 pid=4415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.369000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:28:38.369000 audit: BPF prog-id=216 op=LOAD Dec 16 03:28:38.369000 audit[4415]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc51cf7940 a2=94 a3=54428f items=0 ppid=4042 pid=4415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.369000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:28:38.370000 audit: BPF prog-id=216 op=UNLOAD Dec 16 03:28:38.370000 audit[4415]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc51cf7940 a2=94 a3=54428f 
items=0 ppid=4042 pid=4415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.370000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:28:38.370000 audit: BPF prog-id=217 op=LOAD Dec 16 03:28:38.370000 audit[4415]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc51cf7970 a2=94 a3=2 items=0 ppid=4042 pid=4415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.370000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:28:38.370000 audit: BPF prog-id=217 op=UNLOAD Dec 16 03:28:38.370000 audit[4415]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc51cf7970 a2=0 a3=2 items=0 ppid=4042 pid=4415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.370000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:28:38.370000 audit: BPF prog-id=218 op=LOAD Dec 16 03:28:38.370000 audit[4415]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc51cf7720 a2=94 a3=4 items=0 ppid=4042 pid=4415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.370000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:28:38.370000 audit: BPF prog-id=218 op=UNLOAD Dec 16 03:28:38.370000 audit[4415]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc51cf7720 a2=94 a3=4 items=0 ppid=4042 pid=4415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.370000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:28:38.370000 audit: BPF prog-id=219 op=LOAD Dec 16 03:28:38.370000 audit[4415]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc51cf7820 a2=94 a3=7ffc51cf79a0 items=0 ppid=4042 pid=4415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.370000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:28:38.370000 audit: BPF prog-id=219 op=UNLOAD Dec 16 03:28:38.370000 audit[4415]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc51cf7820 a2=0 a3=7ffc51cf79a0 items=0 ppid=4042 pid=4415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.370000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:28:38.375000 audit: BPF prog-id=220 op=LOAD Dec 16 03:28:38.375000 audit[4415]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc51cf6f50 a2=94 a3=2 items=0 ppid=4042 pid=4415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.375000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:28:38.376000 audit: BPF prog-id=220 op=UNLOAD Dec 16 03:28:38.376000 audit[4415]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc51cf6f50 a2=0 a3=2 items=0 ppid=4042 pid=4415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.376000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:28:38.376000 audit: BPF prog-id=221 op=LOAD Dec 16 03:28:38.376000 audit[4415]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc51cf7050 a2=94 a3=30 items=0 ppid=4042 pid=4415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.376000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:28:38.413862 containerd[1596]: time="2025-12-16T03:28:38.413721711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hrg2x,Uid:8744312e-a06c-4ec6-97fa-99683d819e93,Namespace:calico-system,Attempt:0,}" Dec 16 03:28:38.417000 audit: BPF prog-id=222 op=LOAD Dec 16 03:28:38.417000 audit[4421]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffe38ad000 a2=98 a3=0 items=0 ppid=4042 pid=4421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.417000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:28:38.418000 audit: BPF prog-id=222 op=UNLOAD Dec 16 03:28:38.418000 audit[4421]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fffe38acfd0 a3=0 items=0 ppid=4042 pid=4421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.418000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:28:38.418000 audit: BPF prog-id=223 op=LOAD Dec 16 03:28:38.418000 audit[4421]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffe38acdf0 a2=94 a3=54428f items=0 ppid=4042 pid=4421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.418000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:28:38.419000 audit: BPF prog-id=223 op=UNLOAD Dec 16 03:28:38.419000 audit[4421]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffe38acdf0 a2=94 a3=54428f items=0 ppid=4042 pid=4421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.419000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:28:38.419000 audit: BPF prog-id=224 op=LOAD Dec 16 03:28:38.419000 audit[4421]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffe38ace20 a2=94 a3=2 items=0 ppid=4042 pid=4421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.419000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:28:38.419000 audit: BPF prog-id=224 op=UNLOAD Dec 16 03:28:38.419000 audit[4421]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffe38ace20 a2=0 a3=2 items=0 ppid=4042 pid=4421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.419000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:28:38.513178 containerd[1596]: time="2025-12-16T03:28:38.512981523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d6f5f89f8-mz87q,Uid:34973ba8-5ee2-4cc3-a8d0-65270e641be0,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"c466ec50e6b380ca21d2885412dc606d207389b26a037e9ecdea58541e4ea407\"" Dec 16 03:28:38.593520 systemd-networkd[1494]: cali1e2a1909872: Gained IPv6LL Dec 16 03:28:38.682222 containerd[1596]: time="2025-12-16T03:28:38.681957154Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:28:38.686064 containerd[1596]: time="2025-12-16T03:28:38.685082915Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:28:38.686362 containerd[1596]: time="2025-12-16T03:28:38.686286680Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:28:38.686882 kubelet[2782]: E1216 03:28:38.686817 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:28:38.687224 kubelet[2782]: E1216 03:28:38.686895 2782 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:28:38.687539 containerd[1596]: time="2025-12-16T03:28:38.687364409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 03:28:38.687857 kubelet[2782]: E1216 03:28:38.687364 2782 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-699cff98b-z4b6s_calico-apiserver(fb012d08-2ffd-46d4-bf1b-e4743471acfd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:28:38.687857 kubelet[2782]: E1216 03:28:38.687696 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-699cff98b-z4b6s" podUID="fb012d08-2ffd-46d4-bf1b-e4743471acfd" Dec 16 03:28:38.704899 systemd-networkd[1494]: calicfdbfc4b066: Link UP Dec 16 03:28:38.707967 systemd-networkd[1494]: calicfdbfc4b066: Gained carrier Dec 16 03:28:38.745201 containerd[1596]: 2025-12-16 03:28:38.528 [INFO][4422] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--8--fbad3a37dc-k8s-csi--node--driver--hrg2x-eth0 csi-node-driver- calico-system 8744312e-a06c-4ec6-97fa-99683d819e93 720 0 2025-12-16 03:28:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4547.0.0-8-fbad3a37dc csi-node-driver-hrg2x eth0 
csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calicfdbfc4b066 [] [] }} ContainerID="c2ba4ac700bcc5b9183a32331c71d98bf812ace3321ea414bb5b1b4fac53f66a" Namespace="calico-system" Pod="csi-node-driver-hrg2x" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-csi--node--driver--hrg2x-" Dec 16 03:28:38.745201 containerd[1596]: 2025-12-16 03:28:38.529 [INFO][4422] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c2ba4ac700bcc5b9183a32331c71d98bf812ace3321ea414bb5b1b4fac53f66a" Namespace="calico-system" Pod="csi-node-driver-hrg2x" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-csi--node--driver--hrg2x-eth0" Dec 16 03:28:38.745201 containerd[1596]: 2025-12-16 03:28:38.636 [INFO][4440] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c2ba4ac700bcc5b9183a32331c71d98bf812ace3321ea414bb5b1b4fac53f66a" HandleID="k8s-pod-network.c2ba4ac700bcc5b9183a32331c71d98bf812ace3321ea414bb5b1b4fac53f66a" Workload="ci--4547.0.0--8--fbad3a37dc-k8s-csi--node--driver--hrg2x-eth0" Dec 16 03:28:38.745201 containerd[1596]: 2025-12-16 03:28:38.636 [INFO][4440] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c2ba4ac700bcc5b9183a32331c71d98bf812ace3321ea414bb5b1b4fac53f66a" HandleID="k8s-pod-network.c2ba4ac700bcc5b9183a32331c71d98bf812ace3321ea414bb5b1b4fac53f66a" Workload="ci--4547.0.0--8--fbad3a37dc-k8s-csi--node--driver--hrg2x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f470), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-8-fbad3a37dc", "pod":"csi-node-driver-hrg2x", "timestamp":"2025-12-16 03:28:38.636412983 +0000 UTC"}, Hostname:"ci-4547.0.0-8-fbad3a37dc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:28:38.745201 containerd[1596]: 2025-12-16 03:28:38.636 [INFO][4440] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:28:38.745201 containerd[1596]: 2025-12-16 03:28:38.637 [INFO][4440] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 03:28:38.745201 containerd[1596]: 2025-12-16 03:28:38.637 [INFO][4440] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-8-fbad3a37dc' Dec 16 03:28:38.745201 containerd[1596]: 2025-12-16 03:28:38.648 [INFO][4440] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c2ba4ac700bcc5b9183a32331c71d98bf812ace3321ea414bb5b1b4fac53f66a" host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:38.745201 containerd[1596]: 2025-12-16 03:28:38.656 [INFO][4440] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:38.745201 containerd[1596]: 2025-12-16 03:28:38.663 [INFO][4440] ipam/ipam.go 511: Trying affinity for 192.168.122.192/26 host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:38.745201 containerd[1596]: 2025-12-16 03:28:38.668 [INFO][4440] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.192/26 host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:38.745201 containerd[1596]: 2025-12-16 03:28:38.671 [INFO][4440] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:38.745201 containerd[1596]: 2025-12-16 03:28:38.671 [INFO][4440] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.c2ba4ac700bcc5b9183a32331c71d98bf812ace3321ea414bb5b1b4fac53f66a" host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:38.745201 containerd[1596]: 2025-12-16 03:28:38.674 [INFO][4440] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c2ba4ac700bcc5b9183a32331c71d98bf812ace3321ea414bb5b1b4fac53f66a Dec 16 03:28:38.745201 containerd[1596]: 2025-12-16 03:28:38.681 [INFO][4440] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.192/26 handle="k8s-pod-network.c2ba4ac700bcc5b9183a32331c71d98bf812ace3321ea414bb5b1b4fac53f66a" host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:38.745201 containerd[1596]: 2025-12-16 03:28:38.692 [INFO][4440] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.122.197/26] block=192.168.122.192/26 handle="k8s-pod-network.c2ba4ac700bcc5b9183a32331c71d98bf812ace3321ea414bb5b1b4fac53f66a" host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:38.745201 containerd[1596]: 2025-12-16 03:28:38.692 [INFO][4440] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.197/26] handle="k8s-pod-network.c2ba4ac700bcc5b9183a32331c71d98bf812ace3321ea414bb5b1b4fac53f66a" host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:38.745201 containerd[1596]: 2025-12-16 03:28:38.692 [INFO][4440] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
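The IPAM entries above show Calico confirming an affinity for the block 192.168.122.192/26 on this host and then claiming 192.168.122.197 out of it for the csi-node-driver pod. A quick sanity check that a claimed address really belongs to the affine block (a minimal Python sketch, not part of the log) looks like this:

import ipaddress

# Block affinity and claimed address as reported by ipam/ipam.go above.
block = ipaddress.ip_network("192.168.122.192/26")
assigned = ipaddress.ip_address("192.168.122.197")

print(block.num_addresses)   # 64 addresses in a /26
print(assigned in block)     # True: the claimed IP falls inside the block
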
Dec 16 03:28:38.745201 containerd[1596]: 2025-12-16 03:28:38.692 [INFO][4440] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.197/26] IPv6=[] ContainerID="c2ba4ac700bcc5b9183a32331c71d98bf812ace3321ea414bb5b1b4fac53f66a" HandleID="k8s-pod-network.c2ba4ac700bcc5b9183a32331c71d98bf812ace3321ea414bb5b1b4fac53f66a" Workload="ci--4547.0.0--8--fbad3a37dc-k8s-csi--node--driver--hrg2x-eth0" Dec 16 03:28:38.745896 containerd[1596]: 2025-12-16 03:28:38.697 [INFO][4422] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c2ba4ac700bcc5b9183a32331c71d98bf812ace3321ea414bb5b1b4fac53f66a" Namespace="calico-system" Pod="csi-node-driver-hrg2x" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-csi--node--driver--hrg2x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--8--fbad3a37dc-k8s-csi--node--driver--hrg2x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8744312e-a06c-4ec6-97fa-99683d819e93", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 28, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-8-fbad3a37dc", ContainerID:"", Pod:"csi-node-driver-hrg2x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.122.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicfdbfc4b066", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:28:38.745896 containerd[1596]: 2025-12-16 03:28:38.698 [INFO][4422] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.197/32] ContainerID="c2ba4ac700bcc5b9183a32331c71d98bf812ace3321ea414bb5b1b4fac53f66a" Namespace="calico-system" Pod="csi-node-driver-hrg2x" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-csi--node--driver--hrg2x-eth0" Dec 16 03:28:38.745896 containerd[1596]: 2025-12-16 03:28:38.698 [INFO][4422] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicfdbfc4b066 ContainerID="c2ba4ac700bcc5b9183a32331c71d98bf812ace3321ea414bb5b1b4fac53f66a" Namespace="calico-system" Pod="csi-node-driver-hrg2x" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-csi--node--driver--hrg2x-eth0" Dec 16 03:28:38.745896 containerd[1596]: 2025-12-16 03:28:38.708 [INFO][4422] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c2ba4ac700bcc5b9183a32331c71d98bf812ace3321ea414bb5b1b4fac53f66a" Namespace="calico-system" Pod="csi-node-driver-hrg2x" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-csi--node--driver--hrg2x-eth0" Dec 16 03:28:38.745896 containerd[1596]: 2025-12-16 03:28:38.710 [INFO][4422] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c2ba4ac700bcc5b9183a32331c71d98bf812ace3321ea414bb5b1b4fac53f66a" Namespace="calico-system" Pod="csi-node-driver-hrg2x" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-csi--node--driver--hrg2x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--8--fbad3a37dc-k8s-csi--node--driver--hrg2x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8744312e-a06c-4ec6-97fa-99683d819e93", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 28, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-8-fbad3a37dc", ContainerID:"c2ba4ac700bcc5b9183a32331c71d98bf812ace3321ea414bb5b1b4fac53f66a", Pod:"csi-node-driver-hrg2x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.122.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicfdbfc4b066", MAC:"96:12:81:8f:bf:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:28:38.745896 containerd[1596]: 2025-12-16 03:28:38.733 [INFO][4422] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c2ba4ac700bcc5b9183a32331c71d98bf812ace3321ea414bb5b1b4fac53f66a" Namespace="calico-system" Pod="csi-node-driver-hrg2x" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-csi--node--driver--hrg2x-eth0" Dec 16 03:28:38.792905 containerd[1596]: time="2025-12-16T03:28:38.792696132Z" level=info msg="connecting to shim c2ba4ac700bcc5b9183a32331c71d98bf812ace3321ea414bb5b1b4fac53f66a" address="unix:///run/containerd/s/832efbe91a248e1e012aa5147ee20db8927ef26ad579b12f67d807fbd220b308" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:28:38.838621 kubelet[2782]: E1216 03:28:38.838560 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:28:38.842416 kubelet[2782]: E1216 03:28:38.842312 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-699cff98b-z4b6s" podUID="fb012d08-2ffd-46d4-bf1b-e4743471acfd" Dec 16 03:28:38.881730 systemd[1]: Started cri-containerd-c2ba4ac700bcc5b9183a32331c71d98bf812ace3321ea414bb5b1b4fac53f66a.scope - libcontainer container c2ba4ac700bcc5b9183a32331c71d98bf812ace3321ea414bb5b1b4fac53f66a. 
Dec 16 03:28:38.932000 audit: BPF prog-id=225 op=LOAD Dec 16 03:28:38.934000 audit: BPF prog-id=226 op=LOAD Dec 16 03:28:38.934000 audit[4472]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=4461 pid=4472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332626134616337303062636335623931383361333233333163373164 Dec 16 03:28:38.935000 audit: BPF prog-id=226 op=UNLOAD Dec 16 03:28:38.935000 audit[4472]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4461 pid=4472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332626134616337303062636335623931383361333233333163373164 Dec 16 03:28:38.935000 audit: BPF prog-id=227 op=LOAD Dec 16 03:28:38.935000 audit[4472]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=4461 pid=4472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332626134616337303062636335623931383361333233333163373164 Dec 16 03:28:38.935000 audit: BPF prog-id=228 op=LOAD Dec 16 03:28:38.935000 audit[4472]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=4461 pid=4472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332626134616337303062636335623931383361333233333163373164 Dec 16 03:28:38.935000 audit: BPF prog-id=228 op=UNLOAD Dec 16 03:28:38.935000 audit[4472]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4461 pid=4472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332626134616337303062636335623931383361333233333163373164 Dec 16 03:28:38.935000 audit: BPF prog-id=227 op=UNLOAD Dec 16 03:28:38.935000 audit[4472]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4461 pid=4472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332626134616337303062636335623931383361333233333163373164 Dec 16 03:28:38.935000 audit: BPF prog-id=229 op=LOAD Dec 16 03:28:38.935000 audit[4472]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=4461 pid=4472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332626134616337303062636335623931383361333233333163373164 Dec 16 03:28:38.976138 containerd[1596]: time="2025-12-16T03:28:38.976017296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hrg2x,Uid:8744312e-a06c-4ec6-97fa-99683d819e93,Namespace:calico-system,Attempt:0,} returns sandbox id \"c2ba4ac700bcc5b9183a32331c71d98bf812ace3321ea414bb5b1b4fac53f66a\"" Dec 16 03:28:38.992000 audit: BPF prog-id=230 op=LOAD Dec 16 03:28:38.993305 kernel: kauditd_printk_skb: 290 callbacks suppressed Dec 16 03:28:38.997568 kernel: audit: type=1334 audit(1765855718.992:677): prog-id=230 op=LOAD Dec 16 03:28:38.997622 kernel: audit: type=1300 audit(1765855718.992:677): arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffe38acce0 a2=94 a3=1 items=0 ppid=4042 pid=4421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.992000 audit[4421]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffe38acce0 a2=94 a3=1 items=0 ppid=4042 pid=4421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:38.992000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:28:39.007171 kernel: audit: type=1327 audit(1765855718.992:677): proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:28:39.000000 audit: BPF prog-id=230 op=UNLOAD Dec 16 03:28:39.000000 audit[4421]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffe38acce0 a2=94 a3=1 items=0 ppid=4042 pid=4421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:39.011713 kernel: audit: type=1334 audit(1765855719.000:678): prog-id=230 op=UNLOAD Dec 16 03:28:39.011908 kernel: audit: type=1300 audit(1765855719.000:678): 
arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffe38acce0 a2=94 a3=1 items=0 ppid=4042 pid=4421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:39.000000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:28:39.020281 kernel: audit: type=1327 audit(1765855719.000:678): proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:28:39.032000 audit: BPF prog-id=231 op=LOAD Dec 16 03:28:39.034147 kernel: audit: type=1334 audit(1765855719.032:679): prog-id=231 op=LOAD Dec 16 03:28:39.032000 audit[4421]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffe38accd0 a2=94 a3=4 items=0 ppid=4042 pid=4421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:39.040131 kernel: audit: type=1300 audit(1765855719.032:679): arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffe38accd0 a2=94 a3=4 items=0 ppid=4042 pid=4421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:39.040864 systemd-networkd[1494]: cali6c23ab006a6: Gained IPv6LL Dec 16 03:28:39.032000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:28:39.045588 containerd[1596]: time="2025-12-16T03:28:39.045411236Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:28:39.047181 kernel: audit: type=1327 audit(1765855719.032:679): proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:28:39.033000 audit: BPF prog-id=231 op=UNLOAD Dec 16 03:28:39.051262 kernel: audit: type=1334 audit(1765855719.033:680): prog-id=231 op=UNLOAD Dec 16 03:28:39.033000 audit[4421]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fffe38accd0 a2=0 a3=4 items=0 ppid=4042 pid=4421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:39.033000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:28:39.034000 audit: BPF prog-id=232 op=LOAD Dec 16 03:28:39.034000 audit[4421]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffe38acb30 a2=94 a3=5 items=0 ppid=4042 pid=4421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:39.034000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:28:39.034000 audit: BPF prog-id=232 op=UNLOAD Dec 16 03:28:39.034000 audit[4421]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fffe38acb30 a2=0 a3=5 items=0 ppid=4042 pid=4421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:39.034000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:28:39.034000 audit: BPF prog-id=233 op=LOAD Dec 16 03:28:39.034000 audit[4421]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffe38acd50 a2=94 a3=6 items=0 ppid=4042 pid=4421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:39.034000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:28:39.034000 audit: BPF prog-id=233 op=UNLOAD Dec 16 03:28:39.034000 audit[4421]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fffe38acd50 a2=0 a3=6 items=0 ppid=4042 pid=4421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:39.034000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:28:39.035000 audit: BPF prog-id=234 op=LOAD Dec 16 03:28:39.035000 audit[4421]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffe38ac500 a2=94 a3=88 items=0 ppid=4042 pid=4421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:39.035000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:28:39.040000 audit: BPF prog-id=235 op=LOAD Dec 16 03:28:39.040000 audit[4421]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fffe38ac380 a2=94 a3=2 items=0 ppid=4042 pid=4421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:39.040000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:28:39.040000 audit: BPF prog-id=235 op=UNLOAD Dec 16 03:28:39.040000 audit[4421]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fffe38ac3b0 a2=0 a3=7fffe38ac4b0 items=0 ppid=4042 pid=4421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:39.040000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:28:39.041000 audit: BPF prog-id=234 op=UNLOAD Dec 16 03:28:39.041000 audit[4421]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=35df0d10 a2=0 a3=107f6f047332da28 items=0 ppid=4042 pid=4421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:39.041000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:28:39.056257 containerd[1596]: time="2025-12-16T03:28:39.049998592Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 03:28:39.056257 containerd[1596]: time="2025-12-16T03:28:39.050265969Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 03:28:39.057415 kubelet[2782]: E1216 03:28:39.057349 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:28:39.057854 kubelet[2782]: E1216 03:28:39.057424 2782 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:28:39.057854 kubelet[2782]: E1216 03:28:39.057623 2782 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-d6f5f89f8-mz87q_calico-system(34973ba8-5ee2-4cc3-a8d0-65270e641be0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 03:28:39.057854 kubelet[2782]: E1216 03:28:39.057665 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d6f5f89f8-mz87q" podUID="34973ba8-5ee2-4cc3-a8d0-65270e641be0" Dec 16 03:28:39.058448 containerd[1596]: time="2025-12-16T03:28:39.058180709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 03:28:39.064000 audit: BPF prog-id=221 op=UNLOAD Dec 16 03:28:39.064000 audit[4042]: SYSCALL 
arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c00075a700 a2=0 a3=0 items=0 ppid=4028 pid=4042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:39.064000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 16 03:28:39.274000 audit[4521]: NETFILTER_CFG table=nat:121 family=2 entries=15 op=nft_register_chain pid=4521 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:28:39.274000 audit[4521]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fff9efa4700 a2=0 a3=7fff9efa46ec items=0 ppid=4042 pid=4521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:39.274000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:28:39.277000 audit[4522]: NETFILTER_CFG table=mangle:122 family=2 entries=16 op=nft_register_chain pid=4522 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:28:39.277000 audit[4522]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffd38347f70 a2=0 a3=7ffd38347f5c items=0 ppid=4042 pid=4522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:39.277000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:28:39.290000 audit[4520]: NETFILTER_CFG table=raw:123 family=2 entries=21 op=nft_register_chain pid=4520 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:28:39.290000 audit[4520]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffeebae26f0 a2=0 a3=7ffeebae26dc items=0 ppid=4042 pid=4520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:39.290000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:28:39.300000 audit[4525]: NETFILTER_CFG table=filter:124 family=2 entries=128 op=nft_register_chain pid=4525 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:28:39.300000 audit[4525]: SYSCALL arch=c000003e syscall=46 success=yes exit=72768 a0=3 a1=7ffead2ed800 a2=0 a3=7ffead2ed7ec items=0 ppid=4042 pid=4525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:39.300000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:28:39.367000 audit[4535]: NETFILTER_CFG table=filter:125 family=2 entries=17 op=nft_register_rule pid=4535 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:28:39.367000 audit[4535]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=5248 a0=3 a1=7ffd97e07680 a2=0 a3=7ffd97e0766c items=0 ppid=2935 pid=4535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:39.367000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:28:39.375000 audit[4535]: NETFILTER_CFG table=nat:126 family=2 entries=35 op=nft_register_chain pid=4535 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:28:39.375000 audit[4535]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffd97e07680 a2=0 a3=7ffd97e0766c items=0 ppid=2935 pid=4535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:39.375000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:28:39.400000 audit[4536]: NETFILTER_CFG table=filter:127 family=2 entries=116 op=nft_register_chain pid=4536 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:28:39.400000 audit[4536]: SYSCALL arch=c000003e syscall=46 success=yes exit=65600 a0=3 a1=7fffd0090160 a2=0 a3=7fffd009014c items=0 ppid=4042 pid=4536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:39.400000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:28:39.402610 kubelet[2782]: E1216 03:28:39.402545 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:28:39.403553 containerd[1596]: time="2025-12-16T03:28:39.403520114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-hphzw,Uid:4bd5b5ce-0952-4c36-9327-5163395763f4,Namespace:kube-system,Attempt:0,}" Dec 16 03:28:39.404415 containerd[1596]: time="2025-12-16T03:28:39.404361512Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:28:39.405673 containerd[1596]: time="2025-12-16T03:28:39.405541504Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 03:28:39.406138 containerd[1596]: time="2025-12-16T03:28:39.406005960Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 03:28:39.406806 kubelet[2782]: E1216 03:28:39.406742 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:28:39.407102 kubelet[2782]: E1216 03:28:39.406893 2782 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:28:39.420844 kubelet[2782]: E1216 03:28:39.420361 2782 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-hrg2x_calico-system(8744312e-a06c-4ec6-97fa-99683d819e93): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 03:28:39.422828 containerd[1596]: time="2025-12-16T03:28:39.422750989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 03:28:39.614008 systemd-networkd[1494]: cali304e08763ea: Link UP Dec 16 03:28:39.616076 systemd-networkd[1494]: cali304e08763ea: Gained carrier Dec 16 03:28:39.617374 systemd-networkd[1494]: cali1b6f8bf0bbd: Gained IPv6LL Dec 16 03:28:39.652856 containerd[1596]: 2025-12-16 03:28:39.498 [INFO][4542] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--8--fbad3a37dc-k8s-coredns--66bc5c9577--hphzw-eth0 coredns-66bc5c9577- kube-system 4bd5b5ce-0952-4c36-9327-5163395763f4 841 0 2025-12-16 03:27:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547.0.0-8-fbad3a37dc coredns-66bc5c9577-hphzw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali304e08763ea [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="ef98eaf2a38428cb1603c948adb85144ee9c08e9b6b21b4c02af513a66663d04" Namespace="kube-system" Pod="coredns-66bc5c9577-hphzw" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-coredns--66bc5c9577--hphzw-" Dec 16 03:28:39.652856 containerd[1596]: 2025-12-16 03:28:39.498 [INFO][4542] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ef98eaf2a38428cb1603c948adb85144ee9c08e9b6b21b4c02af513a66663d04" Namespace="kube-system" Pod="coredns-66bc5c9577-hphzw" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-coredns--66bc5c9577--hphzw-eth0" Dec 16 03:28:39.652856 containerd[1596]: 2025-12-16 03:28:39.547 [INFO][4550] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ef98eaf2a38428cb1603c948adb85144ee9c08e9b6b21b4c02af513a66663d04" HandleID="k8s-pod-network.ef98eaf2a38428cb1603c948adb85144ee9c08e9b6b21b4c02af513a66663d04" Workload="ci--4547.0.0--8--fbad3a37dc-k8s-coredns--66bc5c9577--hphzw-eth0" Dec 16 03:28:39.652856 containerd[1596]: 2025-12-16 03:28:39.547 [INFO][4550] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ef98eaf2a38428cb1603c948adb85144ee9c08e9b6b21b4c02af513a66663d04" HandleID="k8s-pod-network.ef98eaf2a38428cb1603c948adb85144ee9c08e9b6b21b4c02af513a66663d04" Workload="ci--4547.0.0--8--fbad3a37dc-k8s-coredns--66bc5c9577--hphzw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f870), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547.0.0-8-fbad3a37dc", "pod":"coredns-66bc5c9577-hphzw", "timestamp":"2025-12-16 03:28:39.547449065 +0000 UTC"}, Hostname:"ci-4547.0.0-8-fbad3a37dc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:28:39.652856 containerd[1596]: 2025-12-16 03:28:39.547 [INFO][4550] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:28:39.652856 containerd[1596]: 2025-12-16 03:28:39.547 [INFO][4550] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 03:28:39.652856 containerd[1596]: 2025-12-16 03:28:39.547 [INFO][4550] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-8-fbad3a37dc' Dec 16 03:28:39.652856 containerd[1596]: 2025-12-16 03:28:39.558 [INFO][4550] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ef98eaf2a38428cb1603c948adb85144ee9c08e9b6b21b4c02af513a66663d04" host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:39.652856 containerd[1596]: 2025-12-16 03:28:39.566 [INFO][4550] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:39.652856 containerd[1596]: 2025-12-16 03:28:39.574 [INFO][4550] ipam/ipam.go 511: Trying affinity for 192.168.122.192/26 host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:39.652856 containerd[1596]: 2025-12-16 03:28:39.578 [INFO][4550] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.192/26 host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:39.652856 containerd[1596]: 2025-12-16 03:28:39.582 [INFO][4550] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:39.652856 containerd[1596]: 2025-12-16 03:28:39.582 [INFO][4550] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.ef98eaf2a38428cb1603c948adb85144ee9c08e9b6b21b4c02af513a66663d04" host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:39.652856 containerd[1596]: 2025-12-16 03:28:39.586 [INFO][4550] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ef98eaf2a38428cb1603c948adb85144ee9c08e9b6b21b4c02af513a66663d04 Dec 16 03:28:39.652856 containerd[1596]: 2025-12-16 03:28:39.595 [INFO][4550] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.192/26 handle="k8s-pod-network.ef98eaf2a38428cb1603c948adb85144ee9c08e9b6b21b4c02af513a66663d04" host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:39.652856 containerd[1596]: 2025-12-16 03:28:39.604 [INFO][4550] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.122.198/26] block=192.168.122.192/26 handle="k8s-pod-network.ef98eaf2a38428cb1603c948adb85144ee9c08e9b6b21b4c02af513a66663d04" host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:39.652856 containerd[1596]: 2025-12-16 03:28:39.605 [INFO][4550] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.198/26] handle="k8s-pod-network.ef98eaf2a38428cb1603c948adb85144ee9c08e9b6b21b4c02af513a66663d04" host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:39.652856 containerd[1596]: 2025-12-16 03:28:39.605 [INFO][4550] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
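The image pull failures in this section all follow the same containerd/kubelet pattern: a "fetch failed after status: 404 Not Found" from ghcr.io, a PullImage "..." failed error ending in "not found", and then ErrImagePull / ImagePullBackOff on the pod. Tallying which image references are failing can be done straight from a saved journal dump; a minimal Python sketch (the file name "journal.txt" is a placeholder, not something referenced by the log):

import re
from collections import Counter

# Count failed pulls per image reference in a plain-text journal dump.
pattern = re.compile(r'PullImage \\?"([^"\\]+)\\?" failed')

with open("journal.txt") as fh:   # hypothetical saved journal text
    failures = Counter(m.group(1) for m in pattern.finditer(fh.read()))

for image, count in failures.most_common():
    print(f"{count:3d}  {image}")
# e.g. ghcr.io/flatcar/calico/apiserver:v3.30.4,
#      ghcr.io/flatcar/calico/kube-controllers:v3.30.4, ...
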
Dec 16 03:28:39.652856 containerd[1596]: 2025-12-16 03:28:39.605 [INFO][4550] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.198/26] IPv6=[] ContainerID="ef98eaf2a38428cb1603c948adb85144ee9c08e9b6b21b4c02af513a66663d04" HandleID="k8s-pod-network.ef98eaf2a38428cb1603c948adb85144ee9c08e9b6b21b4c02af513a66663d04" Workload="ci--4547.0.0--8--fbad3a37dc-k8s-coredns--66bc5c9577--hphzw-eth0" Dec 16 03:28:39.655639 containerd[1596]: 2025-12-16 03:28:39.609 [INFO][4542] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ef98eaf2a38428cb1603c948adb85144ee9c08e9b6b21b4c02af513a66663d04" Namespace="kube-system" Pod="coredns-66bc5c9577-hphzw" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-coredns--66bc5c9577--hphzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--8--fbad3a37dc-k8s-coredns--66bc5c9577--hphzw-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"4bd5b5ce-0952-4c36-9327-5163395763f4", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 27, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-8-fbad3a37dc", ContainerID:"", Pod:"coredns-66bc5c9577-hphzw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali304e08763ea", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:28:39.655639 containerd[1596]: 2025-12-16 03:28:39.609 [INFO][4542] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.198/32] ContainerID="ef98eaf2a38428cb1603c948adb85144ee9c08e9b6b21b4c02af513a66663d04" Namespace="kube-system" Pod="coredns-66bc5c9577-hphzw" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-coredns--66bc5c9577--hphzw-eth0" Dec 16 03:28:39.655639 containerd[1596]: 2025-12-16 03:28:39.609 [INFO][4542] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali304e08763ea ContainerID="ef98eaf2a38428cb1603c948adb85144ee9c08e9b6b21b4c02af513a66663d04" Namespace="kube-system" Pod="coredns-66bc5c9577-hphzw" 
WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-coredns--66bc5c9577--hphzw-eth0" Dec 16 03:28:39.655639 containerd[1596]: 2025-12-16 03:28:39.619 [INFO][4542] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ef98eaf2a38428cb1603c948adb85144ee9c08e9b6b21b4c02af513a66663d04" Namespace="kube-system" Pod="coredns-66bc5c9577-hphzw" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-coredns--66bc5c9577--hphzw-eth0" Dec 16 03:28:39.655639 containerd[1596]: 2025-12-16 03:28:39.622 [INFO][4542] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ef98eaf2a38428cb1603c948adb85144ee9c08e9b6b21b4c02af513a66663d04" Namespace="kube-system" Pod="coredns-66bc5c9577-hphzw" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-coredns--66bc5c9577--hphzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--8--fbad3a37dc-k8s-coredns--66bc5c9577--hphzw-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"4bd5b5ce-0952-4c36-9327-5163395763f4", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 27, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-8-fbad3a37dc", ContainerID:"ef98eaf2a38428cb1603c948adb85144ee9c08e9b6b21b4c02af513a66663d04", Pod:"coredns-66bc5c9577-hphzw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali304e08763ea", MAC:"86:39:82:2f:9d:0f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:28:39.656298 containerd[1596]: 2025-12-16 03:28:39.647 [INFO][4542] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ef98eaf2a38428cb1603c948adb85144ee9c08e9b6b21b4c02af513a66663d04" Namespace="kube-system" Pod="coredns-66bc5c9577-hphzw" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-coredns--66bc5c9577--hphzw-eth0" Dec 16 03:28:39.694761 containerd[1596]: time="2025-12-16T03:28:39.694705502Z" level=info msg="connecting to shim ef98eaf2a38428cb1603c948adb85144ee9c08e9b6b21b4c02af513a66663d04" 
address="unix:///run/containerd/s/857fb80a241dcf283f135d00662125ba748558b16f3c28d730f7a370855261cd" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:28:39.719000 audit[4582]: NETFILTER_CFG table=filter:128 family=2 entries=44 op=nft_register_chain pid=4582 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:28:39.719000 audit[4582]: SYSCALL arch=c000003e syscall=46 success=yes exit=21516 a0=3 a1=7ffc1711cde0 a2=0 a3=7ffc1711cdcc items=0 ppid=4042 pid=4582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:39.719000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:28:39.744510 systemd-networkd[1494]: vxlan.calico: Gained IPv6LL Dec 16 03:28:39.745419 systemd[1]: Started cri-containerd-ef98eaf2a38428cb1603c948adb85144ee9c08e9b6b21b4c02af513a66663d04.scope - libcontainer container ef98eaf2a38428cb1603c948adb85144ee9c08e9b6b21b4c02af513a66663d04. Dec 16 03:28:39.768000 audit: BPF prog-id=236 op=LOAD Dec 16 03:28:39.769000 audit: BPF prog-id=237 op=LOAD Dec 16 03:28:39.769000 audit[4584]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4573 pid=4584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:39.769000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566393865616632613338343238636231363033633934386164623835 Dec 16 03:28:39.769000 audit: BPF prog-id=237 op=UNLOAD Dec 16 03:28:39.769000 audit[4584]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4573 pid=4584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:39.769000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566393865616632613338343238636231363033633934386164623835 Dec 16 03:28:39.770000 audit: BPF prog-id=238 op=LOAD Dec 16 03:28:39.770000 audit[4584]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4573 pid=4584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:39.770000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566393865616632613338343238636231363033633934386164623835 Dec 16 03:28:39.770000 audit: BPF prog-id=239 op=LOAD Dec 16 03:28:39.770000 audit[4584]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4573 pid=4584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:39.770000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566393865616632613338343238636231363033633934386164623835 Dec 16 03:28:39.770000 audit: BPF prog-id=239 op=UNLOAD Dec 16 03:28:39.770000 audit[4584]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4573 pid=4584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:39.770000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566393865616632613338343238636231363033633934386164623835 Dec 16 03:28:39.770000 audit: BPF prog-id=238 op=UNLOAD Dec 16 03:28:39.770000 audit[4584]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4573 pid=4584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:39.770000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566393865616632613338343238636231363033633934386164623835 Dec 16 03:28:39.770000 audit: BPF prog-id=240 op=LOAD Dec 16 03:28:39.770000 audit[4584]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=4573 pid=4584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:39.770000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566393865616632613338343238636231363033633934386164623835 Dec 16 03:28:39.773740 containerd[1596]: time="2025-12-16T03:28:39.770658269Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:28:39.773740 containerd[1596]: time="2025-12-16T03:28:39.773196002Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 03:28:39.773740 containerd[1596]: time="2025-12-16T03:28:39.773274404Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 03:28:39.774759 kubelet[2782]: E1216 03:28:39.774703 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:28:39.774884 kubelet[2782]: E1216 03:28:39.774765 2782 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:28:39.774884 kubelet[2782]: E1216 03:28:39.774849 2782 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-hrg2x_calico-system(8744312e-a06c-4ec6-97fa-99683d819e93): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 03:28:39.774951 kubelet[2782]: E1216 03:28:39.774897 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hrg2x" podUID="8744312e-a06c-4ec6-97fa-99683d819e93" Dec 16 03:28:39.830391 containerd[1596]: time="2025-12-16T03:28:39.830338635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-hphzw,Uid:4bd5b5ce-0952-4c36-9327-5163395763f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef98eaf2a38428cb1603c948adb85144ee9c08e9b6b21b4c02af513a66663d04\"" Dec 16 03:28:39.831563 kubelet[2782]: E1216 03:28:39.831535 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:28:39.848120 containerd[1596]: time="2025-12-16T03:28:39.847612178Z" level=info msg="CreateContainer within sandbox \"ef98eaf2a38428cb1603c948adb85144ee9c08e9b6b21b4c02af513a66663d04\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 03:28:39.860260 containerd[1596]: time="2025-12-16T03:28:39.859399814Z" level=info msg="Container 57d878323f69c0ed341dda432e4a0e78e0d704dc4ab9c447aec06149dea57070: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:28:39.873995 kubelet[2782]: E1216 03:28:39.873868 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:28:39.877733 kubelet[2782]: E1216 03:28:39.877684 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-d6f5f89f8-mz87q" podUID="34973ba8-5ee2-4cc3-a8d0-65270e641be0" Dec 16 03:28:39.879127 kubelet[2782]: E1216 03:28:39.879060 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-699cff98b-z4b6s" podUID="fb012d08-2ffd-46d4-bf1b-e4743471acfd" Dec 16 03:28:39.879927 kubelet[2782]: E1216 03:28:39.879861 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hrg2x" podUID="8744312e-a06c-4ec6-97fa-99683d819e93" Dec 16 03:28:39.880640 containerd[1596]: time="2025-12-16T03:28:39.880557768Z" level=info msg="CreateContainer within sandbox \"ef98eaf2a38428cb1603c948adb85144ee9c08e9b6b21b4c02af513a66663d04\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"57d878323f69c0ed341dda432e4a0e78e0d704dc4ab9c447aec06149dea57070\"" Dec 16 03:28:39.882074 containerd[1596]: time="2025-12-16T03:28:39.881998255Z" level=info msg="StartContainer for \"57d878323f69c0ed341dda432e4a0e78e0d704dc4ab9c447aec06149dea57070\"" Dec 16 03:28:39.883568 containerd[1596]: time="2025-12-16T03:28:39.883508307Z" level=info msg="connecting to shim 57d878323f69c0ed341dda432e4a0e78e0d704dc4ab9c447aec06149dea57070" address="unix:///run/containerd/s/857fb80a241dcf283f135d00662125ba748558b16f3c28d730f7a370855261cd" protocol=ttrpc version=3 Dec 16 03:28:39.933373 systemd[1]: Started cri-containerd-57d878323f69c0ed341dda432e4a0e78e0d704dc4ab9c447aec06149dea57070.scope - libcontainer container 57d878323f69c0ed341dda432e4a0e78e0d704dc4ab9c447aec06149dea57070. 
Dec 16 03:28:39.975000 audit: BPF prog-id=241 op=LOAD Dec 16 03:28:39.977000 audit: BPF prog-id=242 op=LOAD Dec 16 03:28:39.977000 audit[4612]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=4573 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:39.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537643837383332336636396330656433343164646134333265346130 Dec 16 03:28:39.978000 audit: BPF prog-id=242 op=UNLOAD Dec 16 03:28:39.978000 audit[4612]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4573 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:39.978000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537643837383332336636396330656433343164646134333265346130 Dec 16 03:28:39.978000 audit: BPF prog-id=243 op=LOAD Dec 16 03:28:39.978000 audit[4612]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4573 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:39.978000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537643837383332336636396330656433343164646134333265346130 Dec 16 03:28:39.978000 audit: BPF prog-id=244 op=LOAD Dec 16 03:28:39.978000 audit[4612]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4573 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:39.978000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537643837383332336636396330656433343164646134333265346130 Dec 16 03:28:39.979000 audit: BPF prog-id=244 op=UNLOAD Dec 16 03:28:39.979000 audit[4612]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4573 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:39.979000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537643837383332336636396330656433343164646134333265346130 Dec 16 03:28:39.980000 audit: BPF prog-id=243 op=UNLOAD Dec 16 03:28:39.980000 audit[4612]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4573 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:39.980000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537643837383332336636396330656433343164646134333265346130 Dec 16 03:28:39.980000 audit: BPF prog-id=245 op=LOAD Dec 16 03:28:39.980000 audit[4612]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=4573 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:39.980000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537643837383332336636396330656433343164646134333265346130 Dec 16 03:28:40.008375 containerd[1596]: time="2025-12-16T03:28:40.008260266Z" level=info msg="StartContainer for \"57d878323f69c0ed341dda432e4a0e78e0d704dc4ab9c447aec06149dea57070\" returns successfully" Dec 16 03:28:40.256382 systemd-networkd[1494]: calicfdbfc4b066: Gained IPv6LL Dec 16 03:28:40.403489 containerd[1596]: time="2025-12-16T03:28:40.403069179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-mx7nf,Uid:cff3841d-c916-42d7-ba21-c96ff077d2f0,Namespace:calico-system,Attempt:0,}" Dec 16 03:28:40.405016 containerd[1596]: time="2025-12-16T03:28:40.404932888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699cff98b-rrdrk,Uid:49d300b0-388a-4cb6-b194-17855a6b768b,Namespace:calico-apiserver,Attempt:0,}" Dec 16 03:28:40.598076 systemd-networkd[1494]: cali5e81a978af8: Link UP Dec 16 03:28:40.599613 systemd-networkd[1494]: cali5e81a978af8: Gained carrier Dec 16 03:28:40.632051 containerd[1596]: 2025-12-16 03:28:40.475 [INFO][4646] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--8--fbad3a37dc-k8s-calico--apiserver--699cff98b--rrdrk-eth0 calico-apiserver-699cff98b- calico-apiserver 49d300b0-388a-4cb6-b194-17855a6b768b 847 0 2025-12-16 03:28:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:699cff98b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547.0.0-8-fbad3a37dc calico-apiserver-699cff98b-rrdrk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5e81a978af8 [] [] }} ContainerID="0876f03fd40f31b4595b62e0743cda8f54ba86277d43ea365be57dd7e349de7d" Namespace="calico-apiserver" Pod="calico-apiserver-699cff98b-rrdrk" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-calico--apiserver--699cff98b--rrdrk-" Dec 16 03:28:40.632051 containerd[1596]: 2025-12-16 03:28:40.475 [INFO][4646] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0876f03fd40f31b4595b62e0743cda8f54ba86277d43ea365be57dd7e349de7d" Namespace="calico-apiserver" Pod="calico-apiserver-699cff98b-rrdrk" 
WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-calico--apiserver--699cff98b--rrdrk-eth0" Dec 16 03:28:40.632051 containerd[1596]: 2025-12-16 03:28:40.536 [INFO][4669] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0876f03fd40f31b4595b62e0743cda8f54ba86277d43ea365be57dd7e349de7d" HandleID="k8s-pod-network.0876f03fd40f31b4595b62e0743cda8f54ba86277d43ea365be57dd7e349de7d" Workload="ci--4547.0.0--8--fbad3a37dc-k8s-calico--apiserver--699cff98b--rrdrk-eth0" Dec 16 03:28:40.632051 containerd[1596]: 2025-12-16 03:28:40.537 [INFO][4669] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0876f03fd40f31b4595b62e0743cda8f54ba86277d43ea365be57dd7e349de7d" HandleID="k8s-pod-network.0876f03fd40f31b4595b62e0743cda8f54ba86277d43ea365be57dd7e349de7d" Workload="ci--4547.0.0--8--fbad3a37dc-k8s-calico--apiserver--699cff98b--rrdrk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf9a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547.0.0-8-fbad3a37dc", "pod":"calico-apiserver-699cff98b-rrdrk", "timestamp":"2025-12-16 03:28:40.536382462 +0000 UTC"}, Hostname:"ci-4547.0.0-8-fbad3a37dc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:28:40.632051 containerd[1596]: 2025-12-16 03:28:40.537 [INFO][4669] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:28:40.632051 containerd[1596]: 2025-12-16 03:28:40.537 [INFO][4669] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 03:28:40.632051 containerd[1596]: 2025-12-16 03:28:40.537 [INFO][4669] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-8-fbad3a37dc' Dec 16 03:28:40.632051 containerd[1596]: 2025-12-16 03:28:40.546 [INFO][4669] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0876f03fd40f31b4595b62e0743cda8f54ba86277d43ea365be57dd7e349de7d" host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:40.632051 containerd[1596]: 2025-12-16 03:28:40.552 [INFO][4669] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:40.632051 containerd[1596]: 2025-12-16 03:28:40.558 [INFO][4669] ipam/ipam.go 511: Trying affinity for 192.168.122.192/26 host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:40.632051 containerd[1596]: 2025-12-16 03:28:40.561 [INFO][4669] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.192/26 host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:40.632051 containerd[1596]: 2025-12-16 03:28:40.564 [INFO][4669] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:40.632051 containerd[1596]: 2025-12-16 03:28:40.564 [INFO][4669] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.0876f03fd40f31b4595b62e0743cda8f54ba86277d43ea365be57dd7e349de7d" host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:40.632051 containerd[1596]: 2025-12-16 03:28:40.566 [INFO][4669] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0876f03fd40f31b4595b62e0743cda8f54ba86277d43ea365be57dd7e349de7d Dec 16 03:28:40.632051 containerd[1596]: 2025-12-16 03:28:40.572 [INFO][4669] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.192/26 handle="k8s-pod-network.0876f03fd40f31b4595b62e0743cda8f54ba86277d43ea365be57dd7e349de7d" host="ci-4547.0.0-8-fbad3a37dc" 
Dec 16 03:28:40.632051 containerd[1596]: 2025-12-16 03:28:40.581 [INFO][4669] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.122.199/26] block=192.168.122.192/26 handle="k8s-pod-network.0876f03fd40f31b4595b62e0743cda8f54ba86277d43ea365be57dd7e349de7d" host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:40.632051 containerd[1596]: 2025-12-16 03:28:40.581 [INFO][4669] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.199/26] handle="k8s-pod-network.0876f03fd40f31b4595b62e0743cda8f54ba86277d43ea365be57dd7e349de7d" host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:40.632051 containerd[1596]: 2025-12-16 03:28:40.581 [INFO][4669] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 03:28:40.632051 containerd[1596]: 2025-12-16 03:28:40.582 [INFO][4669] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.199/26] IPv6=[] ContainerID="0876f03fd40f31b4595b62e0743cda8f54ba86277d43ea365be57dd7e349de7d" HandleID="k8s-pod-network.0876f03fd40f31b4595b62e0743cda8f54ba86277d43ea365be57dd7e349de7d" Workload="ci--4547.0.0--8--fbad3a37dc-k8s-calico--apiserver--699cff98b--rrdrk-eth0" Dec 16 03:28:40.634164 containerd[1596]: 2025-12-16 03:28:40.590 [INFO][4646] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0876f03fd40f31b4595b62e0743cda8f54ba86277d43ea365be57dd7e349de7d" Namespace="calico-apiserver" Pod="calico-apiserver-699cff98b-rrdrk" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-calico--apiserver--699cff98b--rrdrk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--8--fbad3a37dc-k8s-calico--apiserver--699cff98b--rrdrk-eth0", GenerateName:"calico-apiserver-699cff98b-", Namespace:"calico-apiserver", SelfLink:"", UID:"49d300b0-388a-4cb6-b194-17855a6b768b", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 28, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"699cff98b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-8-fbad3a37dc", ContainerID:"", Pod:"calico-apiserver-699cff98b-rrdrk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5e81a978af8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:28:40.634164 containerd[1596]: 2025-12-16 03:28:40.590 [INFO][4646] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.199/32] ContainerID="0876f03fd40f31b4595b62e0743cda8f54ba86277d43ea365be57dd7e349de7d" Namespace="calico-apiserver" Pod="calico-apiserver-699cff98b-rrdrk" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-calico--apiserver--699cff98b--rrdrk-eth0" Dec 16 03:28:40.634164 containerd[1596]: 2025-12-16 03:28:40.590 [INFO][4646] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5e81a978af8 
ContainerID="0876f03fd40f31b4595b62e0743cda8f54ba86277d43ea365be57dd7e349de7d" Namespace="calico-apiserver" Pod="calico-apiserver-699cff98b-rrdrk" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-calico--apiserver--699cff98b--rrdrk-eth0" Dec 16 03:28:40.634164 containerd[1596]: 2025-12-16 03:28:40.600 [INFO][4646] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0876f03fd40f31b4595b62e0743cda8f54ba86277d43ea365be57dd7e349de7d" Namespace="calico-apiserver" Pod="calico-apiserver-699cff98b-rrdrk" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-calico--apiserver--699cff98b--rrdrk-eth0" Dec 16 03:28:40.634164 containerd[1596]: 2025-12-16 03:28:40.602 [INFO][4646] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0876f03fd40f31b4595b62e0743cda8f54ba86277d43ea365be57dd7e349de7d" Namespace="calico-apiserver" Pod="calico-apiserver-699cff98b-rrdrk" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-calico--apiserver--699cff98b--rrdrk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--8--fbad3a37dc-k8s-calico--apiserver--699cff98b--rrdrk-eth0", GenerateName:"calico-apiserver-699cff98b-", Namespace:"calico-apiserver", SelfLink:"", UID:"49d300b0-388a-4cb6-b194-17855a6b768b", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 28, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"699cff98b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-8-fbad3a37dc", ContainerID:"0876f03fd40f31b4595b62e0743cda8f54ba86277d43ea365be57dd7e349de7d", Pod:"calico-apiserver-699cff98b-rrdrk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5e81a978af8", MAC:"42:81:45:85:57:33", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:28:40.634164 containerd[1596]: 2025-12-16 03:28:40.617 [INFO][4646] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0876f03fd40f31b4595b62e0743cda8f54ba86277d43ea365be57dd7e349de7d" Namespace="calico-apiserver" Pod="calico-apiserver-699cff98b-rrdrk" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-calico--apiserver--699cff98b--rrdrk-eth0" Dec 16 03:28:40.677506 containerd[1596]: time="2025-12-16T03:28:40.676716962Z" level=info msg="connecting to shim 0876f03fd40f31b4595b62e0743cda8f54ba86277d43ea365be57dd7e349de7d" address="unix:///run/containerd/s/9a698bd96ef925cc818512929c0298d06d361e26a480682228522e94ca23daa5" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:28:40.693000 audit[4701]: NETFILTER_CFG table=filter:129 family=2 entries=53 op=nft_register_chain pid=4701 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:28:40.693000 audit[4701]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=26624 a0=3 a1=7ffcef9a1f70 a2=0 a3=7ffcef9a1f5c items=0 ppid=4042 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:40.693000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:28:40.742716 systemd-networkd[1494]: cali6152a00d6a1: Link UP Dec 16 03:28:40.746468 systemd-networkd[1494]: cali6152a00d6a1: Gained carrier Dec 16 03:28:40.793921 containerd[1596]: 2025-12-16 03:28:40.487 [INFO][4650] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--8--fbad3a37dc-k8s-goldmane--7c778bb748--mx7nf-eth0 goldmane-7c778bb748- calico-system cff3841d-c916-42d7-ba21-c96ff077d2f0 850 0 2025-12-16 03:28:11 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4547.0.0-8-fbad3a37dc goldmane-7c778bb748-mx7nf eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6152a00d6a1 [] [] }} ContainerID="43711bdbc54843fa842ab4cdb4af9c098cac96512f0dd41f3e52ffa94d764866" Namespace="calico-system" Pod="goldmane-7c778bb748-mx7nf" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-goldmane--7c778bb748--mx7nf-" Dec 16 03:28:40.793921 containerd[1596]: 2025-12-16 03:28:40.488 [INFO][4650] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="43711bdbc54843fa842ab4cdb4af9c098cac96512f0dd41f3e52ffa94d764866" Namespace="calico-system" Pod="goldmane-7c778bb748-mx7nf" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-goldmane--7c778bb748--mx7nf-eth0" Dec 16 03:28:40.793921 containerd[1596]: 2025-12-16 03:28:40.543 [INFO][4674] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="43711bdbc54843fa842ab4cdb4af9c098cac96512f0dd41f3e52ffa94d764866" HandleID="k8s-pod-network.43711bdbc54843fa842ab4cdb4af9c098cac96512f0dd41f3e52ffa94d764866" Workload="ci--4547.0.0--8--fbad3a37dc-k8s-goldmane--7c778bb748--mx7nf-eth0" Dec 16 03:28:40.793921 containerd[1596]: 2025-12-16 03:28:40.544 [INFO][4674] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="43711bdbc54843fa842ab4cdb4af9c098cac96512f0dd41f3e52ffa94d764866" HandleID="k8s-pod-network.43711bdbc54843fa842ab4cdb4af9c098cac96512f0dd41f3e52ffa94d764866" Workload="ci--4547.0.0--8--fbad3a37dc-k8s-goldmane--7c778bb748--mx7nf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d59e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-8-fbad3a37dc", "pod":"goldmane-7c778bb748-mx7nf", "timestamp":"2025-12-16 03:28:40.543314734 +0000 UTC"}, Hostname:"ci-4547.0.0-8-fbad3a37dc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:28:40.793921 containerd[1596]: 2025-12-16 03:28:40.544 [INFO][4674] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:28:40.793921 containerd[1596]: 2025-12-16 03:28:40.581 [INFO][4674] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 03:28:40.793921 containerd[1596]: 2025-12-16 03:28:40.582 [INFO][4674] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-8-fbad3a37dc' Dec 16 03:28:40.793921 containerd[1596]: 2025-12-16 03:28:40.649 [INFO][4674] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.43711bdbc54843fa842ab4cdb4af9c098cac96512f0dd41f3e52ffa94d764866" host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:40.793921 containerd[1596]: 2025-12-16 03:28:40.662 [INFO][4674] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:40.793921 containerd[1596]: 2025-12-16 03:28:40.683 [INFO][4674] ipam/ipam.go 511: Trying affinity for 192.168.122.192/26 host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:40.793921 containerd[1596]: 2025-12-16 03:28:40.687 [INFO][4674] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.192/26 host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:40.793921 containerd[1596]: 2025-12-16 03:28:40.694 [INFO][4674] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:40.793921 containerd[1596]: 2025-12-16 03:28:40.694 [INFO][4674] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.43711bdbc54843fa842ab4cdb4af9c098cac96512f0dd41f3e52ffa94d764866" host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:40.793921 containerd[1596]: 2025-12-16 03:28:40.701 [INFO][4674] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.43711bdbc54843fa842ab4cdb4af9c098cac96512f0dd41f3e52ffa94d764866 Dec 16 03:28:40.793921 containerd[1596]: 2025-12-16 03:28:40.718 [INFO][4674] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.192/26 handle="k8s-pod-network.43711bdbc54843fa842ab4cdb4af9c098cac96512f0dd41f3e52ffa94d764866" host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:40.793921 containerd[1596]: 2025-12-16 03:28:40.728 [INFO][4674] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.122.200/26] block=192.168.122.192/26 handle="k8s-pod-network.43711bdbc54843fa842ab4cdb4af9c098cac96512f0dd41f3e52ffa94d764866" host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:40.793921 containerd[1596]: 2025-12-16 03:28:40.728 [INFO][4674] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.200/26] handle="k8s-pod-network.43711bdbc54843fa842ab4cdb4af9c098cac96512f0dd41f3e52ffa94d764866" host="ci-4547.0.0-8-fbad3a37dc" Dec 16 03:28:40.793921 containerd[1596]: 2025-12-16 03:28:40.728 [INFO][4674] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 03:28:40.793921 containerd[1596]: 2025-12-16 03:28:40.728 [INFO][4674] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.200/26] IPv6=[] ContainerID="43711bdbc54843fa842ab4cdb4af9c098cac96512f0dd41f3e52ffa94d764866" HandleID="k8s-pod-network.43711bdbc54843fa842ab4cdb4af9c098cac96512f0dd41f3e52ffa94d764866" Workload="ci--4547.0.0--8--fbad3a37dc-k8s-goldmane--7c778bb748--mx7nf-eth0" Dec 16 03:28:40.795834 containerd[1596]: 2025-12-16 03:28:40.734 [INFO][4650] cni-plugin/k8s.go 418: Populated endpoint ContainerID="43711bdbc54843fa842ab4cdb4af9c098cac96512f0dd41f3e52ffa94d764866" Namespace="calico-system" Pod="goldmane-7c778bb748-mx7nf" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-goldmane--7c778bb748--mx7nf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--8--fbad3a37dc-k8s-goldmane--7c778bb748--mx7nf-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"cff3841d-c916-42d7-ba21-c96ff077d2f0", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 28, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-8-fbad3a37dc", ContainerID:"", Pod:"goldmane-7c778bb748-mx7nf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.122.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6152a00d6a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:28:40.795834 containerd[1596]: 2025-12-16 03:28:40.737 [INFO][4650] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.200/32] ContainerID="43711bdbc54843fa842ab4cdb4af9c098cac96512f0dd41f3e52ffa94d764866" Namespace="calico-system" Pod="goldmane-7c778bb748-mx7nf" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-goldmane--7c778bb748--mx7nf-eth0" Dec 16 03:28:40.795834 containerd[1596]: 2025-12-16 03:28:40.738 [INFO][4650] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6152a00d6a1 ContainerID="43711bdbc54843fa842ab4cdb4af9c098cac96512f0dd41f3e52ffa94d764866" Namespace="calico-system" Pod="goldmane-7c778bb748-mx7nf" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-goldmane--7c778bb748--mx7nf-eth0" Dec 16 03:28:40.795834 containerd[1596]: 2025-12-16 03:28:40.742 [INFO][4650] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="43711bdbc54843fa842ab4cdb4af9c098cac96512f0dd41f3e52ffa94d764866" Namespace="calico-system" Pod="goldmane-7c778bb748-mx7nf" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-goldmane--7c778bb748--mx7nf-eth0" Dec 16 03:28:40.795834 containerd[1596]: 2025-12-16 03:28:40.743 [INFO][4650] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="43711bdbc54843fa842ab4cdb4af9c098cac96512f0dd41f3e52ffa94d764866" 
Namespace="calico-system" Pod="goldmane-7c778bb748-mx7nf" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-goldmane--7c778bb748--mx7nf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--8--fbad3a37dc-k8s-goldmane--7c778bb748--mx7nf-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"cff3841d-c916-42d7-ba21-c96ff077d2f0", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 28, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-8-fbad3a37dc", ContainerID:"43711bdbc54843fa842ab4cdb4af9c098cac96512f0dd41f3e52ffa94d764866", Pod:"goldmane-7c778bb748-mx7nf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.122.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6152a00d6a1", MAC:"b2:51:50:bd:46:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:28:40.795834 containerd[1596]: 2025-12-16 03:28:40.766 [INFO][4650] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="43711bdbc54843fa842ab4cdb4af9c098cac96512f0dd41f3e52ffa94d764866" Namespace="calico-system" Pod="goldmane-7c778bb748-mx7nf" WorkloadEndpoint="ci--4547.0.0--8--fbad3a37dc-k8s-goldmane--7c778bb748--mx7nf-eth0" Dec 16 03:28:40.807554 systemd[1]: Started cri-containerd-0876f03fd40f31b4595b62e0743cda8f54ba86277d43ea365be57dd7e349de7d.scope - libcontainer container 0876f03fd40f31b4595b62e0743cda8f54ba86277d43ea365be57dd7e349de7d. 
Dec 16 03:28:40.851053 containerd[1596]: time="2025-12-16T03:28:40.850905561Z" level=info msg="connecting to shim 43711bdbc54843fa842ab4cdb4af9c098cac96512f0dd41f3e52ffa94d764866" address="unix:///run/containerd/s/71458a21c9b3974cccea28349bc5175b66d2a45246f695df3d7ad9cc35a7b7b0" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:28:40.887000 audit: BPF prog-id=246 op=LOAD Dec 16 03:28:40.889000 audit: BPF prog-id=247 op=LOAD Dec 16 03:28:40.889000 audit[4714]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4702 pid=4714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:40.889000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038373666303366643430663331623435393562363265303734336364 Dec 16 03:28:40.889000 audit: BPF prog-id=247 op=UNLOAD Dec 16 03:28:40.889000 audit[4714]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4702 pid=4714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:40.889000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038373666303366643430663331623435393562363265303734336364 Dec 16 03:28:40.889000 audit: BPF prog-id=248 op=LOAD Dec 16 03:28:40.889000 audit[4714]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4702 pid=4714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:40.889000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038373666303366643430663331623435393562363265303734336364 Dec 16 03:28:40.890000 audit: BPF prog-id=249 op=LOAD Dec 16 03:28:40.890000 audit[4714]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4702 pid=4714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:40.890000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038373666303366643430663331623435393562363265303734336364 Dec 16 03:28:40.890000 audit: BPF prog-id=249 op=UNLOAD Dec 16 03:28:40.890000 audit[4714]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4702 pid=4714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:40.890000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038373666303366643430663331623435393562363265303734336364 Dec 16 03:28:40.890000 audit: BPF prog-id=248 op=UNLOAD Dec 16 03:28:40.890000 audit[4714]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4702 pid=4714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:40.890000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038373666303366643430663331623435393562363265303734336364 Dec 16 03:28:40.890000 audit: BPF prog-id=250 op=LOAD Dec 16 03:28:40.890000 audit[4714]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4702 pid=4714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:40.890000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038373666303366643430663331623435393562363265303734336364 Dec 16 03:28:40.913214 kubelet[2782]: E1216 03:28:40.912706 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:28:40.916719 kubelet[2782]: E1216 03:28:40.916581 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hrg2x" podUID="8744312e-a06c-4ec6-97fa-99683d819e93" Dec 16 03:28:40.945131 kubelet[2782]: I1216 03:28:40.943872 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-hphzw" podStartSLOduration=42.943848064 podStartE2EDuration="42.943848064s" podCreationTimestamp="2025-12-16 03:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:28:40.943219948 +0000 UTC m=+48.810432310" watchObservedRunningTime="2025-12-16 03:28:40.943848064 +0000 UTC m=+48.811060454" Dec 16 03:28:40.962513 systemd[1]: Started cri-containerd-43711bdbc54843fa842ab4cdb4af9c098cac96512f0dd41f3e52ffa94d764866.scope - libcontainer container 
43711bdbc54843fa842ab4cdb4af9c098cac96512f0dd41f3e52ffa94d764866. Dec 16 03:28:40.998000 audit[4782]: NETFILTER_CFG table=filter:130 family=2 entries=70 op=nft_register_chain pid=4782 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:28:40.998000 audit[4782]: SYSCALL arch=c000003e syscall=46 success=yes exit=33956 a0=3 a1=7fff3e7e5760 a2=0 a3=7fff3e7e574c items=0 ppid=4042 pid=4782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:40.998000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:28:41.084000 audit[4785]: NETFILTER_CFG table=filter:131 family=2 entries=14 op=nft_register_rule pid=4785 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:28:41.084000 audit[4785]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdc49150b0 a2=0 a3=7ffdc491509c items=0 ppid=2935 pid=4785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:41.084000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:28:41.096000 audit[4785]: NETFILTER_CFG table=nat:132 family=2 entries=44 op=nft_register_rule pid=4785 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:28:41.096000 audit[4785]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffdc49150b0 a2=0 a3=7ffdc491509c items=0 ppid=2935 pid=4785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:41.096000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:28:41.101000 audit: BPF prog-id=251 op=LOAD Dec 16 03:28:41.104000 audit: BPF prog-id=252 op=LOAD Dec 16 03:28:41.104000 audit[4763]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=4750 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:41.104000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433373131626462633534383433666138343261623463646234616639 Dec 16 03:28:41.104000 audit: BPF prog-id=252 op=UNLOAD Dec 16 03:28:41.104000 audit[4763]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4750 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:41.104000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433373131626462633534383433666138343261623463646234616639 Dec 
16 03:28:41.105000 audit: BPF prog-id=253 op=LOAD Dec 16 03:28:41.105000 audit[4763]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=4750 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:41.105000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433373131626462633534383433666138343261623463646234616639 Dec 16 03:28:41.105000 audit: BPF prog-id=254 op=LOAD Dec 16 03:28:41.105000 audit[4763]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=4750 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:41.105000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433373131626462633534383433666138343261623463646234616639 Dec 16 03:28:41.105000 audit: BPF prog-id=254 op=UNLOAD Dec 16 03:28:41.105000 audit[4763]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4750 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:41.105000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433373131626462633534383433666138343261623463646234616639 Dec 16 03:28:41.105000 audit: BPF prog-id=253 op=UNLOAD Dec 16 03:28:41.105000 audit[4763]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4750 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:41.105000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433373131626462633534383433666138343261623463646234616639 Dec 16 03:28:41.105000 audit: BPF prog-id=255 op=LOAD Dec 16 03:28:41.105000 audit[4763]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=4750 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:41.105000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433373131626462633534383433666138343261623463646234616639 Dec 16 03:28:41.188173 containerd[1596]: time="2025-12-16T03:28:41.188125889Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-7c778bb748-mx7nf,Uid:cff3841d-c916-42d7-ba21-c96ff077d2f0,Namespace:calico-system,Attempt:0,} returns sandbox id \"43711bdbc54843fa842ab4cdb4af9c098cac96512f0dd41f3e52ffa94d764866\"" Dec 16 03:28:41.196357 containerd[1596]: time="2025-12-16T03:28:41.196161066Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 03:28:41.206992 containerd[1596]: time="2025-12-16T03:28:41.206815903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699cff98b-rrdrk,Uid:49d300b0-388a-4cb6-b194-17855a6b768b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0876f03fd40f31b4595b62e0743cda8f54ba86277d43ea365be57dd7e349de7d\"" Dec 16 03:28:41.344666 systemd-networkd[1494]: cali304e08763ea: Gained IPv6LL Dec 16 03:28:41.533757 containerd[1596]: time="2025-12-16T03:28:41.533685497Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:28:41.534744 containerd[1596]: time="2025-12-16T03:28:41.534629320Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 03:28:41.534744 containerd[1596]: time="2025-12-16T03:28:41.534688854Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 03:28:41.535203 kubelet[2782]: E1216 03:28:41.535156 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:28:41.535526 kubelet[2782]: E1216 03:28:41.535401 2782 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:28:41.535766 kubelet[2782]: E1216 03:28:41.535721 2782 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-mx7nf_calico-system(cff3841d-c916-42d7-ba21-c96ff077d2f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 03:28:41.535841 kubelet[2782]: E1216 03:28:41.535801 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mx7nf" podUID="cff3841d-c916-42d7-ba21-c96ff077d2f0" Dec 16 03:28:41.537400 containerd[1596]: time="2025-12-16T03:28:41.537344994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:28:41.856516 systemd-networkd[1494]: cali5e81a978af8: Gained IPv6LL Dec 16 03:28:41.888977 containerd[1596]: time="2025-12-16T03:28:41.888796052Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:28:41.890526 containerd[1596]: 
time="2025-12-16T03:28:41.890487419Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:28:41.890526 containerd[1596]: time="2025-12-16T03:28:41.890448594Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:28:41.891142 kubelet[2782]: E1216 03:28:41.891020 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:28:41.891449 kubelet[2782]: E1216 03:28:41.891228 2782 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:28:41.891591 kubelet[2782]: E1216 03:28:41.891549 2782 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-699cff98b-rrdrk_calico-apiserver(49d300b0-388a-4cb6-b194-17855a6b768b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:28:41.892015 kubelet[2782]: E1216 03:28:41.891941 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-699cff98b-rrdrk" podUID="49d300b0-388a-4cb6-b194-17855a6b768b" Dec 16 03:28:41.920702 systemd-networkd[1494]: cali6152a00d6a1: Gained IPv6LL Dec 16 03:28:41.924493 kubelet[2782]: E1216 03:28:41.924345 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-699cff98b-rrdrk" podUID="49d300b0-388a-4cb6-b194-17855a6b768b" Dec 16 03:28:41.929930 kubelet[2782]: E1216 03:28:41.929843 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:28:41.942734 kubelet[2782]: E1216 03:28:41.942477 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mx7nf" podUID="cff3841d-c916-42d7-ba21-c96ff077d2f0" Dec 16 03:28:42.183000 audit[4799]: NETFILTER_CFG table=filter:133 family=2 entries=14 op=nft_register_rule pid=4799 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:28:42.183000 audit[4799]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd8aa6f140 a2=0 a3=7ffd8aa6f12c items=0 ppid=2935 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:42.183000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:28:42.200000 audit[4799]: NETFILTER_CFG table=nat:134 family=2 entries=56 op=nft_register_chain pid=4799 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:28:42.200000 audit[4799]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffd8aa6f140 a2=0 a3=7ffd8aa6f12c items=0 ppid=2935 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:42.200000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:28:42.930684 kubelet[2782]: E1216 03:28:42.930630 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:28:42.934304 kubelet[2782]: E1216 03:28:42.934221 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-699cff98b-rrdrk" podUID="49d300b0-388a-4cb6-b194-17855a6b768b" Dec 16 03:28:42.935563 kubelet[2782]: E1216 03:28:42.935287 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mx7nf" podUID="cff3841d-c916-42d7-ba21-c96ff077d2f0" Dec 16 03:28:43.934376 kubelet[2782]: E1216 03:28:43.933844 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:28:44.755605 systemd[1]: Started sshd@7-144.126.212.19:22-147.75.109.163:44978.service - OpenSSH per-connection server daemon (147.75.109.163:44978). 
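The kubelet dns.go warnings above ("Nameserver limits exceeded") fire when the resolver list applied to a pod is longer than the three nameservers kubelet will use; note that the applied line even repeats 67.207.67.2. A minimal sketch of that truncation, with the limit taken from the warning and the fourth resolver a made-up placeholder, not kubelet's actual code:

```python
# Minimal sketch of kubelet-style nameserver truncation (illustrative only).
# MAX_NS = 3 is the per-pod limit implied by the warning above;
# 192.0.2.53 is a hypothetical extra resolver added for illustration.
MAX_NS = 3

def apply_nameserver_limit(nameservers):
    kept, omitted = nameservers[:MAX_NS], nameservers[MAX_NS:]
    if omitted:
        print("Nameserver limits exceeded, omitted:", omitted,
              "applied nameserver line is:", " ".join(kept))
    return kept

apply_nameserver_limit(["67.207.67.2", "67.207.67.3", "67.207.67.2", "192.0.2.53"])
```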
Dec 16 03:28:44.762404 kernel: kauditd_printk_skb: 159 callbacks suppressed Dec 16 03:28:44.762516 kernel: audit: type=1130 audit(1765855724.755:736): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-144.126.212.19:22-147.75.109.163:44978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:44.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-144.126.212.19:22-147.75.109.163:44978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:44.919000 audit[4813]: USER_ACCT pid=4813 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:44.922955 sshd[4813]: Accepted publickey for core from 147.75.109.163 port 44978 ssh2: RSA SHA256:6kOFCGRVte/xmFQ810xFB2aQq7tPvT6sNyInYz0ISZM Dec 16 03:28:44.924263 kernel: audit: type=1101 audit(1765855724.919:737): pid=4813 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:44.925000 audit[4813]: CRED_ACQ pid=4813 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:44.927639 sshd-session[4813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:28:44.931981 kernel: audit: type=1103 audit(1765855724.925:738): pid=4813 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:44.932160 kernel: audit: type=1006 audit(1765855724.925:739): pid=4813 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Dec 16 03:28:44.925000 audit[4813]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1c2b4780 a2=3 a3=0 items=0 ppid=1 pid=4813 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:44.925000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:28:44.942244 kernel: audit: type=1300 audit(1765855724.925:739): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1c2b4780 a2=3 a3=0 items=0 ppid=1 pid=4813 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:44.942371 kernel: audit: type=1327 audit(1765855724.925:739): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:28:44.948310 systemd-logind[1576]: New session 9 of user core. Dec 16 03:28:44.964466 systemd[1]: Started session-9.scope - Session 9 of User core. 
Dec 16 03:28:44.969000 audit[4813]: USER_START pid=4813 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:44.972000 audit[4817]: CRED_ACQ pid=4817 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:44.976266 kernel: audit: type=1105 audit(1765855724.969:740): pid=4813 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:44.976420 kernel: audit: type=1103 audit(1765855724.972:741): pid=4817 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:45.551462 sshd[4817]: Connection closed by 147.75.109.163 port 44978 Dec 16 03:28:45.552709 sshd-session[4813]: pam_unix(sshd:session): session closed for user core Dec 16 03:28:45.558000 audit[4813]: USER_END pid=4813 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:45.564124 kernel: audit: type=1106 audit(1765855725.558:742): pid=4813 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:45.558000 audit[4813]: CRED_DISP pid=4813 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:45.565381 systemd[1]: sshd@7-144.126.212.19:22-147.75.109.163:44978.service: Deactivated successfully. Dec 16 03:28:45.568126 kernel: audit: type=1104 audit(1765855725.558:743): pid=4813 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:45.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-144.126.212.19:22-147.75.109.163:44978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:45.569661 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 03:28:45.571471 systemd-logind[1576]: Session 9 logged out. Waiting for processes to exit. Dec 16 03:28:45.574727 systemd-logind[1576]: Removed session 9. 
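The proctitle= values in the audit records above are hex strings because the recorded command line contains NUL bytes separating its arguments. A small helper for reading them back, assuming nothing beyond that encoding:

```python
# Decode an audit PROCTITLE value: a hex-encoded command line with NUL
# bytes between arguments.
def decode_proctitle(hex_value: str) -> list[str]:
    raw = bytes.fromhex(hex_value)
    return [part.decode("utf-8", "replace") for part in raw.split(b"\x00") if part]

# The sshd record above decodes to the privileged session process title:
print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
# -> ['sshd-session: core [priv]']
```

The longer runc proctitle earlier in the log decodes the same way into "runc --root /run/containerd/runc/k8s.io --log ..." followed by the truncated task path.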
Dec 16 03:28:50.572194 systemd[1]: Started sshd@8-144.126.212.19:22-147.75.109.163:44988.service - OpenSSH per-connection server daemon (147.75.109.163:44988). Dec 16 03:28:50.573638 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:28:50.573697 kernel: audit: type=1130 audit(1765855730.572:745): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-144.126.212.19:22-147.75.109.163:44988 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:50.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-144.126.212.19:22-147.75.109.163:44988 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:50.677017 sshd[4841]: Accepted publickey for core from 147.75.109.163 port 44988 ssh2: RSA SHA256:6kOFCGRVte/xmFQ810xFB2aQq7tPvT6sNyInYz0ISZM Dec 16 03:28:50.676000 audit[4841]: USER_ACCT pid=4841 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:50.680422 sshd-session[4841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:28:50.681135 kernel: audit: type=1101 audit(1765855730.676:746): pid=4841 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:50.678000 audit[4841]: CRED_ACQ pid=4841 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:50.686957 kernel: audit: type=1103 audit(1765855730.678:747): pid=4841 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:50.687135 kernel: audit: type=1006 audit(1765855730.678:748): pid=4841 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Dec 16 03:28:50.678000 audit[4841]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc1985c50 a2=3 a3=0 items=0 ppid=1 pid=4841 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:50.693356 systemd-logind[1576]: New session 10 of user core. 
Dec 16 03:28:50.696568 kernel: audit: type=1300 audit(1765855730.678:748): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc1985c50 a2=3 a3=0 items=0 ppid=1 pid=4841 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:50.696646 kernel: audit: type=1327 audit(1765855730.678:748): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:28:50.678000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:28:50.701641 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 03:28:50.705000 audit[4841]: USER_START pid=4841 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:50.708000 audit[4845]: CRED_ACQ pid=4845 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:50.713275 kernel: audit: type=1105 audit(1765855730.705:749): pid=4841 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:50.713409 kernel: audit: type=1103 audit(1765855730.708:750): pid=4845 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:50.847747 sshd[4845]: Connection closed by 147.75.109.163 port 44988 Dec 16 03:28:50.849635 sshd-session[4841]: pam_unix(sshd:session): session closed for user core Dec 16 03:28:50.853000 audit[4841]: USER_END pid=4841 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:50.858662 systemd[1]: sshd@8-144.126.212.19:22-147.75.109.163:44988.service: Deactivated successfully. Dec 16 03:28:50.859113 kernel: audit: type=1106 audit(1765855730.853:751): pid=4841 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:50.853000 audit[4841]: CRED_DISP pid=4841 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:50.864214 systemd[1]: session-10.scope: Deactivated successfully. 
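The SYSCALL records in this section identify calls only by number under arch=c000003e (x86_64). A tiny lookup covering just the numbers that actually occur here, following the standard x86_64 syscall table (the full table lives in the kernel's unistd_64.h, or in the audit package's ausyscall utility):

```python
# x86_64 (arch=c000003e) syscall numbers seen in the audit records above.
X86_64_SYSCALLS = {
    1: "write",     # sshd-session records, likely writing the login uid
    3: "close",     # runc closing BPF program fds (the op=UNLOAD pairs)
    46: "sendmsg",  # iptables-restore sending nft rules over netlink
    321: "bpf",     # runc loading BPF programs (the op=LOAD pairs)
}

def syscall_name(num: int) -> str:
    return X86_64_SYSCALLS.get(num, f"unknown({num})")

print(syscall_name(321))  # bpf
```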
Dec 16 03:28:50.865812 kernel: audit: type=1104 audit(1765855730.853:752): pid=4841 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:50.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-144.126.212.19:22-147.75.109.163:44988 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:50.867957 systemd-logind[1576]: Session 10 logged out. Waiting for processes to exit. Dec 16 03:28:50.869512 systemd-logind[1576]: Removed session 10. Dec 16 03:28:51.403987 containerd[1596]: time="2025-12-16T03:28:51.403928092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 03:28:51.705477 containerd[1596]: time="2025-12-16T03:28:51.704988527Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:28:51.706379 containerd[1596]: time="2025-12-16T03:28:51.706159562Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 03:28:51.706379 containerd[1596]: time="2025-12-16T03:28:51.706218880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 03:28:51.706838 kubelet[2782]: E1216 03:28:51.706794 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:28:51.707396 kubelet[2782]: E1216 03:28:51.706856 2782 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:28:51.707396 kubelet[2782]: E1216 03:28:51.707040 2782 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-64588b6f98-lngtn_calico-system(c131b783-1bf8-4038-b6d0-3485a4a710ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 03:28:51.710054 containerd[1596]: time="2025-12-16T03:28:51.709539734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 03:28:52.016899 containerd[1596]: time="2025-12-16T03:28:52.016753793Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:28:52.017872 containerd[1596]: time="2025-12-16T03:28:52.017722567Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 03:28:52.017872 containerd[1596]: time="2025-12-16T03:28:52.017826364Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 03:28:52.018156 kubelet[2782]: E1216 03:28:52.018108 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:28:52.018279 kubelet[2782]: E1216 03:28:52.018167 2782 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:28:52.018509 kubelet[2782]: E1216 03:28:52.018398 2782 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-64588b6f98-lngtn_calico-system(c131b783-1bf8-4038-b6d0-3485a4a710ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 03:28:52.018580 kubelet[2782]: E1216 03:28:52.018478 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64588b6f98-lngtn" podUID="c131b783-1bf8-4038-b6d0-3485a4a710ea" Dec 16 03:28:52.405901 containerd[1596]: time="2025-12-16T03:28:52.405580347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 03:28:52.728712 containerd[1596]: time="2025-12-16T03:28:52.728404403Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:28:52.729371 containerd[1596]: time="2025-12-16T03:28:52.729229721Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 03:28:52.729371 containerd[1596]: time="2025-12-16T03:28:52.729335817Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 03:28:52.729822 kubelet[2782]: E1216 03:28:52.729722 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:28:52.729822 kubelet[2782]: E1216 03:28:52.729788 2782 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:28:52.730760 kubelet[2782]: E1216 03:28:52.730461 2782 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-hrg2x_calico-system(8744312e-a06c-4ec6-97fa-99683d819e93): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 03:28:52.732401 containerd[1596]: time="2025-12-16T03:28:52.732329637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 03:28:53.058012 containerd[1596]: time="2025-12-16T03:28:53.057957691Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:28:53.059202 containerd[1596]: time="2025-12-16T03:28:53.059027253Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 03:28:53.059202 containerd[1596]: time="2025-12-16T03:28:53.059078783Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 03:28:53.059709 kubelet[2782]: E1216 03:28:53.059399 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:28:53.059709 kubelet[2782]: E1216 03:28:53.059460 2782 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:28:53.060320 kubelet[2782]: E1216 03:28:53.060284 2782 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-hrg2x_calico-system(8744312e-a06c-4ec6-97fa-99683d819e93): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 03:28:53.060424 kubelet[2782]: E1216 03:28:53.060363 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hrg2x" podUID="8744312e-a06c-4ec6-97fa-99683d819e93" Dec 16 03:28:53.404754 containerd[1596]: time="2025-12-16T03:28:53.403574205Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 03:28:53.732757 containerd[1596]: time="2025-12-16T03:28:53.732525698Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:28:53.733940 containerd[1596]: time="2025-12-16T03:28:53.733868075Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 03:28:53.734037 containerd[1596]: time="2025-12-16T03:28:53.733992816Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 03:28:53.734282 kubelet[2782]: E1216 03:28:53.734229 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:28:53.735308 kubelet[2782]: E1216 03:28:53.734300 2782 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:28:53.735308 kubelet[2782]: E1216 03:28:53.735001 2782 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-d6f5f89f8-mz87q_calico-system(34973ba8-5ee2-4cc3-a8d0-65270e641be0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 03:28:53.735308 kubelet[2782]: E1216 03:28:53.735052 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d6f5f89f8-mz87q" podUID="34973ba8-5ee2-4cc3-a8d0-65270e641be0" Dec 16 03:28:53.736859 containerd[1596]: time="2025-12-16T03:28:53.734693772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:28:54.076842 containerd[1596]: time="2025-12-16T03:28:54.076688246Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:28:54.077663 containerd[1596]: time="2025-12-16T03:28:54.077601660Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:28:54.077895 containerd[1596]: time="2025-12-16T03:28:54.077634469Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:28:54.078256 kubelet[2782]: E1216 03:28:54.078144 2782 log.go:32] "PullImage from image service failed" err="rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:28:54.078487 kubelet[2782]: E1216 03:28:54.078397 2782 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:28:54.078860 kubelet[2782]: E1216 03:28:54.078785 2782 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-699cff98b-z4b6s_calico-apiserver(fb012d08-2ffd-46d4-bf1b-e4743471acfd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:28:54.079694 kubelet[2782]: E1216 03:28:54.078841 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-699cff98b-z4b6s" podUID="fb012d08-2ffd-46d4-bf1b-e4743471acfd" Dec 16 03:28:55.414457 containerd[1596]: time="2025-12-16T03:28:55.414337790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:28:55.743238 containerd[1596]: time="2025-12-16T03:28:55.742873873Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:28:55.743782 containerd[1596]: time="2025-12-16T03:28:55.743735823Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:28:55.743958 containerd[1596]: time="2025-12-16T03:28:55.743753139Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:28:55.745507 kubelet[2782]: E1216 03:28:55.745242 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:28:55.745507 kubelet[2782]: E1216 03:28:55.745310 2782 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:28:55.745507 kubelet[2782]: E1216 03:28:55.745403 2782 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-699cff98b-rrdrk_calico-apiserver(49d300b0-388a-4cb6-b194-17855a6b768b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:28:55.745507 kubelet[2782]: E1216 03:28:55.745436 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-699cff98b-rrdrk" podUID="49d300b0-388a-4cb6-b194-17855a6b768b" Dec 16 03:28:55.867126 systemd[1]: Started sshd@9-144.126.212.19:22-147.75.109.163:35068.service - OpenSSH per-connection server daemon (147.75.109.163:35068). Dec 16 03:28:55.870318 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:28:55.870393 kernel: audit: type=1130 audit(1765855735.867:754): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-144.126.212.19:22-147.75.109.163:35068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:55.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-144.126.212.19:22-147.75.109.163:35068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:55.977000 audit[4862]: USER_ACCT pid=4862 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:55.977606 sshd[4862]: Accepted publickey for core from 147.75.109.163 port 35068 ssh2: RSA SHA256:6kOFCGRVte/xmFQ810xFB2aQq7tPvT6sNyInYz0ISZM Dec 16 03:28:55.982334 kernel: audit: type=1101 audit(1765855735.977:755): pid=4862 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:55.983000 audit[4862]: CRED_ACQ pid=4862 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:55.986220 sshd-session[4862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:28:55.989369 kernel: audit: type=1103 audit(1765855735.983:756): pid=4862 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:55.989463 kernel: audit: type=1006 audit(1765855735.983:757): pid=4862 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Dec 16 03:28:55.993636 kernel: audit: type=1300 audit(1765855735.983:757): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd604d0660 a2=3 a3=0 items=0 ppid=1 pid=4862 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:55.983000 audit[4862]: SYSCALL arch=c000003e syscall=1 
success=yes exit=3 a0=8 a1=7ffd604d0660 a2=3 a3=0 items=0 ppid=1 pid=4862 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:55.983000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:28:56.003488 kernel: audit: type=1327 audit(1765855735.983:757): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:28:56.013227 systemd-logind[1576]: New session 11 of user core. Dec 16 03:28:56.019680 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 03:28:56.026000 audit[4862]: USER_START pid=4862 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:56.034126 kernel: audit: type=1105 audit(1765855736.026:758): pid=4862 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:56.031000 audit[4866]: CRED_ACQ pid=4866 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:56.042164 kernel: audit: type=1103 audit(1765855736.031:759): pid=4866 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:56.205340 sshd[4866]: Connection closed by 147.75.109.163 port 35068 Dec 16 03:28:56.209420 sshd-session[4862]: pam_unix(sshd:session): session closed for user core Dec 16 03:28:56.212000 audit[4862]: USER_END pid=4862 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:56.220197 systemd[1]: sshd@9-144.126.212.19:22-147.75.109.163:35068.service: Deactivated successfully. 
Dec 16 03:28:56.213000 audit[4862]: CRED_DISP pid=4862 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:56.222708 kernel: audit: type=1106 audit(1765855736.212:760): pid=4862 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:56.222890 kernel: audit: type=1104 audit(1765855736.213:761): pid=4862 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:56.225418 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 03:28:56.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-144.126.212.19:22-147.75.109.163:35068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:56.230849 systemd-logind[1576]: Session 11 logged out. Waiting for processes to exit. Dec 16 03:28:56.236126 systemd[1]: Started sshd@10-144.126.212.19:22-147.75.109.163:35072.service - OpenSSH per-connection server daemon (147.75.109.163:35072). Dec 16 03:28:56.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-144.126.212.19:22-147.75.109.163:35072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:56.238559 systemd-logind[1576]: Removed session 11. Dec 16 03:28:56.366000 audit[4878]: USER_ACCT pid=4878 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:56.366451 sshd[4878]: Accepted publickey for core from 147.75.109.163 port 35072 ssh2: RSA SHA256:6kOFCGRVte/xmFQ810xFB2aQq7tPvT6sNyInYz0ISZM Dec 16 03:28:56.367000 audit[4878]: CRED_ACQ pid=4878 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:56.367000 audit[4878]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe25316fc0 a2=3 a3=0 items=0 ppid=1 pid=4878 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:56.367000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:28:56.369030 sshd-session[4878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:28:56.375540 systemd-logind[1576]: New session 12 of user core. Dec 16 03:28:56.385493 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 16 03:28:56.389000 audit[4878]: USER_START pid=4878 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:56.391000 audit[4882]: CRED_ACQ pid=4882 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:56.596926 sshd[4882]: Connection closed by 147.75.109.163 port 35072 Dec 16 03:28:56.598276 sshd-session[4878]: pam_unix(sshd:session): session closed for user core Dec 16 03:28:56.603000 audit[4878]: USER_END pid=4878 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:56.603000 audit[4878]: CRED_DISP pid=4878 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:56.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-144.126.212.19:22-147.75.109.163:35072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:56.619351 systemd[1]: sshd@10-144.126.212.19:22-147.75.109.163:35072.service: Deactivated successfully. Dec 16 03:28:56.627684 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 03:28:56.630754 systemd-logind[1576]: Session 12 logged out. Waiting for processes to exit. Dec 16 03:28:56.639622 systemd[1]: Started sshd@11-144.126.212.19:22-147.75.109.163:35076.service - OpenSSH per-connection server daemon (147.75.109.163:35076). Dec 16 03:28:56.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-144.126.212.19:22-147.75.109.163:35076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:56.643220 systemd-logind[1576]: Removed session 12. 
Dec 16 03:28:56.739000 audit[4891]: USER_ACCT pid=4891 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:56.739994 sshd[4891]: Accepted publickey for core from 147.75.109.163 port 35076 ssh2: RSA SHA256:6kOFCGRVte/xmFQ810xFB2aQq7tPvT6sNyInYz0ISZM Dec 16 03:28:56.742000 audit[4891]: CRED_ACQ pid=4891 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:56.742000 audit[4891]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe6dfc7b90 a2=3 a3=0 items=0 ppid=1 pid=4891 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:56.742000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:28:56.744191 sshd-session[4891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:28:56.752702 systemd-logind[1576]: New session 13 of user core. Dec 16 03:28:56.760431 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 03:28:56.765000 audit[4891]: USER_START pid=4891 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:56.769000 audit[4895]: CRED_ACQ pid=4895 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:56.951995 sshd[4895]: Connection closed by 147.75.109.163 port 35076 Dec 16 03:28:56.951777 sshd-session[4891]: pam_unix(sshd:session): session closed for user core Dec 16 03:28:56.955000 audit[4891]: USER_END pid=4891 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:56.956000 audit[4891]: CRED_DISP pid=4891 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:56.961920 systemd[1]: sshd@11-144.126.212.19:22-147.75.109.163:35076.service: Deactivated successfully. Dec 16 03:28:56.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-144.126.212.19:22-147.75.109.163:35076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:56.969367 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 03:28:56.974510 systemd-logind[1576]: Session 13 logged out. Waiting for processes to exit. 
Dec 16 03:28:56.977637 systemd-logind[1576]: Removed session 13. Dec 16 03:28:57.402365 containerd[1596]: time="2025-12-16T03:28:57.402287538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 03:28:57.707935 containerd[1596]: time="2025-12-16T03:28:57.707757135Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:28:57.708664 containerd[1596]: time="2025-12-16T03:28:57.708601271Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 03:28:57.708796 containerd[1596]: time="2025-12-16T03:28:57.708718859Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 03:28:57.709008 kubelet[2782]: E1216 03:28:57.708959 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:28:57.709401 kubelet[2782]: E1216 03:28:57.709026 2782 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:28:57.709401 kubelet[2782]: E1216 03:28:57.709163 2782 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-mx7nf_calico-system(cff3841d-c916-42d7-ba21-c96ff077d2f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 03:28:57.709401 kubelet[2782]: E1216 03:28:57.709225 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mx7nf" podUID="cff3841d-c916-42d7-ba21-c96ff077d2f0" Dec 16 03:29:01.977420 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 16 03:29:01.977580 kernel: audit: type=1130 audit(1765855741.971:781): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-144.126.212.19:22-147.75.109.163:35092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:29:01.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-144.126.212.19:22-147.75.109.163:35092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:29:01.972226 systemd[1]: Started sshd@12-144.126.212.19:22-147.75.109.163:35092.service - OpenSSH per-connection server daemon (147.75.109.163:35092). 
Dec 16 03:29:02.045000 audit[4921]: USER_ACCT pid=4921 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:02.046986 sshd[4921]: Accepted publickey for core from 147.75.109.163 port 35092 ssh2: RSA SHA256:6kOFCGRVte/xmFQ810xFB2aQq7tPvT6sNyInYz0ISZM Dec 16 03:29:02.050330 sshd-session[4921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:29:02.047000 audit[4921]: CRED_ACQ pid=4921 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:02.055066 kernel: audit: type=1101 audit(1765855742.045:782): pid=4921 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:02.055225 kernel: audit: type=1103 audit(1765855742.047:783): pid=4921 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:02.059120 kernel: audit: type=1006 audit(1765855742.047:784): pid=4921 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 16 03:29:02.047000 audit[4921]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff351fbd90 a2=3 a3=0 items=0 ppid=1 pid=4921 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:02.066543 kernel: audit: type=1300 audit(1765855742.047:784): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff351fbd90 a2=3 a3=0 items=0 ppid=1 pid=4921 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:02.066695 kernel: audit: type=1327 audit(1765855742.047:784): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:29:02.047000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:29:02.073592 systemd-logind[1576]: New session 14 of user core. Dec 16 03:29:02.077378 systemd[1]: Started session-14.scope - Session 14 of User core. 
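The PROCTITLE fields in these audit records are hex-encoded command lines with NUL-separated arguments. A small decoding sketch for the value that appears above (the same approach applies to the later iptables-restore records, which carry several NUL-separated arguments):

```python
# Decode the hex proctitle= value from the audit records; NUL bytes separate argv entries.
hex_proctitle = "737368642D73657373696F6E3A20636F7265205B707269765D"
argv = [part.decode() for part in bytes.fromhex(hex_proctitle).split(b"\x00")]
print(argv)  # ['sshd-session: core [priv]']
```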
Dec 16 03:29:02.083000 audit[4921]: USER_START pid=4921 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:02.095161 kernel: audit: type=1105 audit(1765855742.083:785): pid=4921 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:02.095414 kernel: audit: type=1103 audit(1765855742.087:786): pid=4925 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:02.087000 audit[4925]: CRED_ACQ pid=4925 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:02.187137 sshd[4925]: Connection closed by 147.75.109.163 port 35092 Dec 16 03:29:02.187964 sshd-session[4921]: pam_unix(sshd:session): session closed for user core Dec 16 03:29:02.188000 audit[4921]: USER_END pid=4921 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:02.196150 kernel: audit: type=1106 audit(1765855742.188:787): pid=4921 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:02.188000 audit[4921]: CRED_DISP pid=4921 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:02.198409 systemd[1]: sshd@12-144.126.212.19:22-147.75.109.163:35092.service: Deactivated successfully. Dec 16 03:29:02.201614 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 03:29:02.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-144.126.212.19:22-147.75.109.163:35092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:29:02.202278 kernel: audit: type=1104 audit(1765855742.188:788): pid=4921 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:02.203943 systemd-logind[1576]: Session 14 logged out. Waiting for processes to exit. 
Dec 16 03:29:02.206540 systemd-logind[1576]: Removed session 14. Dec 16 03:29:04.403941 kubelet[2782]: E1216 03:29:04.403589 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:29:04.405945 kubelet[2782]: E1216 03:29:04.405478 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d6f5f89f8-mz87q" podUID="34973ba8-5ee2-4cc3-a8d0-65270e641be0" Dec 16 03:29:04.406474 kubelet[2782]: E1216 03:29:04.406423 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hrg2x" podUID="8744312e-a06c-4ec6-97fa-99683d819e93" Dec 16 03:29:05.945939 kubelet[2782]: E1216 03:29:05.944898 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:29:06.401124 kubelet[2782]: E1216 03:29:06.401006 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:29:07.209546 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:29:07.209726 kernel: audit: type=1130 audit(1765855747.207:790): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-144.126.212.19:22-147.75.109.163:36012 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:29:07.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-144.126.212.19:22-147.75.109.163:36012 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:29:07.208199 systemd[1]: Started sshd@13-144.126.212.19:22-147.75.109.163:36012.service - OpenSSH per-connection server daemon (147.75.109.163:36012). 
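The repeated dns.go warnings above come from kubelet trimming the node's resolver configuration: glibc-style resolvers honour only the first three nameserver entries, so anything beyond that is dropped and kubelet logs the line it actually applied. A sketch of that trimming against a hypothetical resolv.conf; the three applied addresses are copied from the log, the fourth entry is invented purely for illustration:

```python
# Sketch of the nameserver trimming kubelet warns about. MAXNS = 3 is the
# traditional glibc limit; the resolv.conf contents below are hypothetical.
MAXNS = 3

resolv_conf = """\
nameserver 67.207.67.2
nameserver 67.207.67.3
nameserver 67.207.67.2
nameserver 10.0.0.53
"""

nameservers = [
    line.split()[1]
    for line in resolv_conf.splitlines()
    if line.startswith("nameserver")
]
applied = nameservers[:MAXNS]
print(applied)  # ['67.207.67.2', '67.207.67.3', '67.207.67.2'] -- matches the applied line in the log
```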
Dec 16 03:29:07.353205 sshd[4963]: Accepted publickey for core from 147.75.109.163 port 36012 ssh2: RSA SHA256:6kOFCGRVte/xmFQ810xFB2aQq7tPvT6sNyInYz0ISZM Dec 16 03:29:07.351000 audit[4963]: USER_ACCT pid=4963 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:07.357667 sshd-session[4963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:29:07.360144 kernel: audit: type=1101 audit(1765855747.351:791): pid=4963 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:07.354000 audit[4963]: CRED_ACQ pid=4963 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:07.367354 kernel: audit: type=1103 audit(1765855747.354:792): pid=4963 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:07.367498 kernel: audit: type=1006 audit(1765855747.354:793): pid=4963 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 16 03:29:07.354000 audit[4963]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffca74c5090 a2=3 a3=0 items=0 ppid=1 pid=4963 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:07.374478 kernel: audit: type=1300 audit(1765855747.354:793): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffca74c5090 a2=3 a3=0 items=0 ppid=1 pid=4963 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:07.375169 kernel: audit: type=1327 audit(1765855747.354:793): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:29:07.354000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:29:07.380953 systemd-logind[1576]: New session 15 of user core. Dec 16 03:29:07.390543 systemd[1]: Started session-15.scope - Session 15 of User core. 
Dec 16 03:29:07.395000 audit[4963]: USER_START pid=4963 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:07.403188 kernel: audit: type=1105 audit(1765855747.395:794): pid=4963 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:07.402000 audit[4967]: CRED_ACQ pid=4967 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:07.409405 kernel: audit: type=1103 audit(1765855747.402:795): pid=4967 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:07.410018 kubelet[2782]: E1216 03:29:07.409675 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64588b6f98-lngtn" podUID="c131b783-1bf8-4038-b6d0-3485a4a710ea" Dec 16 03:29:07.567192 sshd[4967]: Connection closed by 147.75.109.163 port 36012 Dec 16 03:29:07.565710 sshd-session[4963]: pam_unix(sshd:session): session closed for user core Dec 16 03:29:07.568000 audit[4963]: USER_END pid=4963 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:07.577157 kernel: audit: type=1106 audit(1765855747.568:796): pid=4963 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:07.582186 kernel: audit: type=1104 audit(1765855747.568:797): pid=4963 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 
addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:07.568000 audit[4963]: CRED_DISP pid=4963 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:07.577387 systemd[1]: sshd@13-144.126.212.19:22-147.75.109.163:36012.service: Deactivated successfully. Dec 16 03:29:07.583390 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 03:29:07.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-144.126.212.19:22-147.75.109.163:36012 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:29:07.586343 systemd-logind[1576]: Session 15 logged out. Waiting for processes to exit. Dec 16 03:29:07.591531 systemd-logind[1576]: Removed session 15. Dec 16 03:29:09.402440 kubelet[2782]: E1216 03:29:09.402385 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-699cff98b-rrdrk" podUID="49d300b0-388a-4cb6-b194-17855a6b768b" Dec 16 03:29:09.404222 kubelet[2782]: E1216 03:29:09.404185 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-699cff98b-z4b6s" podUID="fb012d08-2ffd-46d4-bf1b-e4743471acfd" Dec 16 03:29:10.402842 kubelet[2782]: E1216 03:29:10.402738 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mx7nf" podUID="cff3841d-c916-42d7-ba21-c96ff077d2f0" Dec 16 03:29:12.585382 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:29:12.585534 kernel: audit: type=1130 audit(1765855752.580:799): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-144.126.212.19:22-147.75.109.163:57678 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:29:12.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-144.126.212.19:22-147.75.109.163:57678 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:29:12.581694 systemd[1]: Started sshd@14-144.126.212.19:22-147.75.109.163:57678.service - OpenSSH per-connection server daemon (147.75.109.163:57678). 
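Between the ErrImagePull entries, kubelet keeps the failing containers in ImagePullBackOff and retries on a capped exponential schedule, which is why the pull attempts in this log are spaced tens of seconds to minutes apart. A sketch of that schedule; the 10 s initial delay and 300 s cap are the commonly cited kubelet defaults, not values recorded in this log:

```python
# Sketch of capped exponential back-off, the pattern behind the repeated
# ImagePullBackOff entries. Initial/cap values are assumed defaults, not from the log.
def backoff_delays(initial: float = 10.0, cap: float = 300.0, attempts: int = 6):
    delay = initial
    for _ in range(attempts):
        yield delay
        delay = min(delay * 2, cap)

print(list(backoff_delays()))  # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0]
```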
Dec 16 03:29:12.659000 audit[4982]: USER_ACCT pid=4982 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:12.660570 sshd[4982]: Accepted publickey for core from 147.75.109.163 port 57678 ssh2: RSA SHA256:6kOFCGRVte/xmFQ810xFB2aQq7tPvT6sNyInYz0ISZM Dec 16 03:29:12.663000 audit[4982]: CRED_ACQ pid=4982 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:12.666176 sshd-session[4982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:29:12.666504 kernel: audit: type=1101 audit(1765855752.659:800): pid=4982 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:12.666544 kernel: audit: type=1103 audit(1765855752.663:801): pid=4982 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:12.675510 kernel: audit: type=1006 audit(1765855752.663:802): pid=4982 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 16 03:29:12.675616 kernel: audit: type=1300 audit(1765855752.663:802): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd7975c170 a2=3 a3=0 items=0 ppid=1 pid=4982 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:12.663000 audit[4982]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd7975c170 a2=3 a3=0 items=0 ppid=1 pid=4982 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:12.663000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:29:12.680608 kernel: audit: type=1327 audit(1765855752.663:802): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:29:12.683704 systemd-logind[1576]: New session 16 of user core. Dec 16 03:29:12.692421 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 16 03:29:12.695000 audit[4982]: USER_START pid=4982 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:12.699000 audit[4986]: CRED_ACQ pid=4986 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:12.704803 kernel: audit: type=1105 audit(1765855752.695:803): pid=4982 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:12.704951 kernel: audit: type=1103 audit(1765855752.699:804): pid=4986 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:12.821564 sshd[4986]: Connection closed by 147.75.109.163 port 57678 Dec 16 03:29:12.823410 sshd-session[4982]: pam_unix(sshd:session): session closed for user core Dec 16 03:29:12.825000 audit[4982]: USER_END pid=4982 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:12.831621 systemd[1]: sshd@14-144.126.212.19:22-147.75.109.163:57678.service: Deactivated successfully. Dec 16 03:29:12.825000 audit[4982]: CRED_DISP pid=4982 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:12.836196 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 03:29:12.837866 kernel: audit: type=1106 audit(1765855752.825:805): pid=4982 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:12.837993 kernel: audit: type=1104 audit(1765855752.825:806): pid=4982 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:12.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-144.126.212.19:22-147.75.109.163:57678 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:29:12.842048 systemd-logind[1576]: Session 16 logged out. Waiting for processes to exit. 
Dec 16 03:29:12.844706 systemd-logind[1576]: Removed session 16. Dec 16 03:29:16.404452 containerd[1596]: time="2025-12-16T03:29:16.404212350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 03:29:16.770923 containerd[1596]: time="2025-12-16T03:29:16.770739833Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:29:16.771938 containerd[1596]: time="2025-12-16T03:29:16.771867189Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 03:29:16.772047 containerd[1596]: time="2025-12-16T03:29:16.772011982Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 03:29:16.772501 kubelet[2782]: E1216 03:29:16.772418 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:29:16.773012 kubelet[2782]: E1216 03:29:16.772529 2782 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:29:16.773415 kubelet[2782]: E1216 03:29:16.773038 2782 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-d6f5f89f8-mz87q_calico-system(34973ba8-5ee2-4cc3-a8d0-65270e641be0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 03:29:16.773415 kubelet[2782]: E1216 03:29:16.773113 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d6f5f89f8-mz87q" podUID="34973ba8-5ee2-4cc3-a8d0-65270e641be0" Dec 16 03:29:17.838560 systemd[1]: Started sshd@15-144.126.212.19:22-147.75.109.163:57688.service - OpenSSH per-connection server daemon (147.75.109.163:57688). Dec 16 03:29:17.844839 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:29:17.845047 kernel: audit: type=1130 audit(1765855757.838:808): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-144.126.212.19:22-147.75.109.163:57688 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:29:17.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-144.126.212.19:22-147.75.109.163:57688 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:29:17.993000 audit[4998]: USER_ACCT pid=4998 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:17.996851 sshd[4998]: Accepted publickey for core from 147.75.109.163 port 57688 ssh2: RSA SHA256:6kOFCGRVte/xmFQ810xFB2aQq7tPvT6sNyInYz0ISZM Dec 16 03:29:17.997687 sshd-session[4998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:29:17.994000 audit[4998]: CRED_ACQ pid=4998 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:18.000146 kernel: audit: type=1101 audit(1765855757.993:809): pid=4998 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:18.000244 kernel: audit: type=1103 audit(1765855757.994:810): pid=4998 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:18.004248 kernel: audit: type=1006 audit(1765855757.994:811): pid=4998 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 16 03:29:17.994000 audit[4998]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc69842b10 a2=3 a3=0 items=0 ppid=1 pid=4998 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:18.007562 kernel: audit: type=1300 audit(1765855757.994:811): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc69842b10 a2=3 a3=0 items=0 ppid=1 pid=4998 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:18.008214 systemd-logind[1576]: New session 17 of user core. Dec 16 03:29:17.994000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:29:18.011353 kernel: audit: type=1327 audit(1765855757.994:811): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:29:18.013362 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 16 03:29:18.017000 audit[4998]: USER_START pid=4998 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:18.024122 kernel: audit: type=1105 audit(1765855758.017:812): pid=4998 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:18.025000 audit[5002]: CRED_ACQ pid=5002 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:18.032313 kernel: audit: type=1103 audit(1765855758.025:813): pid=5002 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:18.149070 sshd[5002]: Connection closed by 147.75.109.163 port 57688 Dec 16 03:29:18.149691 sshd-session[4998]: pam_unix(sshd:session): session closed for user core Dec 16 03:29:18.150000 audit[4998]: USER_END pid=4998 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:18.150000 audit[4998]: CRED_DISP pid=4998 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:18.160040 kernel: audit: type=1106 audit(1765855758.150:814): pid=4998 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:18.160277 kernel: audit: type=1104 audit(1765855758.150:815): pid=4998 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:18.168935 systemd[1]: sshd@15-144.126.212.19:22-147.75.109.163:57688.service: Deactivated successfully. Dec 16 03:29:18.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-144.126.212.19:22-147.75.109.163:57688 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:29:18.172712 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 03:29:18.177442 systemd-logind[1576]: Session 17 logged out. Waiting for processes to exit. 
Dec 16 03:29:18.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-144.126.212.19:22-147.75.109.163:57694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:29:18.184003 systemd[1]: Started sshd@16-144.126.212.19:22-147.75.109.163:57694.service - OpenSSH per-connection server daemon (147.75.109.163:57694). Dec 16 03:29:18.185963 systemd-logind[1576]: Removed session 17. Dec 16 03:29:18.264000 audit[5013]: USER_ACCT pid=5013 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:18.266000 audit[5013]: CRED_ACQ pid=5013 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:18.266000 audit[5013]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd93f393d0 a2=3 a3=0 items=0 ppid=1 pid=5013 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:18.266000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:29:18.268446 sshd[5013]: Accepted publickey for core from 147.75.109.163 port 57694 ssh2: RSA SHA256:6kOFCGRVte/xmFQ810xFB2aQq7tPvT6sNyInYz0ISZM Dec 16 03:29:18.268569 sshd-session[5013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:29:18.278417 systemd-logind[1576]: New session 18 of user core. Dec 16 03:29:18.282452 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 16 03:29:18.286000 audit[5013]: USER_START pid=5013 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:18.289000 audit[5017]: CRED_ACQ pid=5017 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:18.772280 sshd[5017]: Connection closed by 147.75.109.163 port 57694 Dec 16 03:29:18.774408 sshd-session[5013]: pam_unix(sshd:session): session closed for user core Dec 16 03:29:18.776000 audit[5013]: USER_END pid=5013 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:18.776000 audit[5013]: CRED_DISP pid=5013 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:18.787351 systemd[1]: sshd@16-144.126.212.19:22-147.75.109.163:57694.service: Deactivated successfully. Dec 16 03:29:18.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-144.126.212.19:22-147.75.109.163:57694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:29:18.791804 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 03:29:18.793640 systemd-logind[1576]: Session 18 logged out. Waiting for processes to exit. Dec 16 03:29:18.799227 systemd[1]: Started sshd@17-144.126.212.19:22-147.75.109.163:57700.service - OpenSSH per-connection server daemon (147.75.109.163:57700). Dec 16 03:29:18.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-144.126.212.19:22-147.75.109.163:57700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:29:18.802108 systemd-logind[1576]: Removed session 18. 
Dec 16 03:29:18.902000 audit[5028]: USER_ACCT pid=5028 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:18.904012 sshd[5028]: Accepted publickey for core from 147.75.109.163 port 57700 ssh2: RSA SHA256:6kOFCGRVte/xmFQ810xFB2aQq7tPvT6sNyInYz0ISZM Dec 16 03:29:18.904000 audit[5028]: CRED_ACQ pid=5028 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:18.904000 audit[5028]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffdc7dccb0 a2=3 a3=0 items=0 ppid=1 pid=5028 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:18.904000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:29:18.906955 sshd-session[5028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:29:18.915182 systemd-logind[1576]: New session 19 of user core. Dec 16 03:29:18.921349 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 16 03:29:18.924000 audit[5028]: USER_START pid=5028 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:18.927000 audit[5032]: CRED_ACQ pid=5032 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:19.444627 kubelet[2782]: E1216 03:29:19.443352 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:29:19.447951 containerd[1596]: time="2025-12-16T03:29:19.444129058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 03:29:19.694000 audit[5049]: NETFILTER_CFG table=filter:135 family=2 entries=14 op=nft_register_rule pid=5049 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:29:19.694000 audit[5049]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffee9d8f9a0 a2=0 a3=7ffee9d8f98c items=0 ppid=2935 pid=5049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:19.694000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:29:19.703000 audit[5049]: NETFILTER_CFG table=nat:136 family=2 entries=20 op=nft_register_rule pid=5049 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:29:19.703000 audit[5049]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffee9d8f9a0 a2=0 a3=7ffee9d8f98c items=0 ppid=2935 pid=5049 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:19.703000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:29:19.709719 sshd[5032]: Connection closed by 147.75.109.163 port 57700 Dec 16 03:29:19.708637 sshd-session[5028]: pam_unix(sshd:session): session closed for user core Dec 16 03:29:19.711000 audit[5028]: USER_END pid=5028 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:19.712000 audit[5028]: CRED_DISP pid=5028 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:19.724784 systemd[1]: sshd@17-144.126.212.19:22-147.75.109.163:57700.service: Deactivated successfully. Dec 16 03:29:19.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-144.126.212.19:22-147.75.109.163:57700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:29:19.729786 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 03:29:19.731984 systemd-logind[1576]: Session 19 logged out. Waiting for processes to exit. Dec 16 03:29:19.741786 systemd[1]: Started sshd@18-144.126.212.19:22-147.75.109.163:57706.service - OpenSSH per-connection server daemon (147.75.109.163:57706). Dec 16 03:29:19.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-144.126.212.19:22-147.75.109.163:57706 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:29:19.744917 systemd-logind[1576]: Removed session 19. 
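The SYSCALL records carry raw numbers: arch=c000003e is the x86_64 audit architecture, syscall=1 is write (the sshd-session records) and syscall=46 is sendmsg (the iptables-restore records configuring netfilter over netlink). A small lookup sketch covering only the numbers that occur in this log; a complete table would come from the kernel's syscall headers:

```python
# Interpret the arch= and syscall= fields of the SYSCALL audit records above.
# Only the two syscall numbers seen in this log are mapped here.
X86_64_SYSCALLS = {1: "write", 46: "sendmsg"}  # subset of the x86_64 syscall table

def describe(arch: str, nr: int) -> str:
    label = "x86_64" if arch == "c000003e" else arch  # 0xC000003E == AUDIT_ARCH_X86_64
    return f"{label}/{X86_64_SYSCALLS.get(nr, f'syscall {nr}')}"

print(describe("c000003e", 1))   # sshd-session records  -> x86_64/write
print(describe("c000003e", 46))  # iptables-restore records -> x86_64/sendmsg
```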
Dec 16 03:29:19.772897 containerd[1596]: time="2025-12-16T03:29:19.772645386Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:29:19.773545 containerd[1596]: time="2025-12-16T03:29:19.773439171Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 03:29:19.773545 containerd[1596]: time="2025-12-16T03:29:19.773503902Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 03:29:19.773963 kubelet[2782]: E1216 03:29:19.773701 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:29:19.773963 kubelet[2782]: E1216 03:29:19.773779 2782 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:29:19.773963 kubelet[2782]: E1216 03:29:19.773883 2782 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-hrg2x_calico-system(8744312e-a06c-4ec6-97fa-99683d819e93): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 03:29:19.777778 containerd[1596]: time="2025-12-16T03:29:19.777722116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 03:29:19.844000 audit[5054]: USER_ACCT pid=5054 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:19.846227 sshd[5054]: Accepted publickey for core from 147.75.109.163 port 57706 ssh2: RSA SHA256:6kOFCGRVte/xmFQ810xFB2aQq7tPvT6sNyInYz0ISZM Dec 16 03:29:19.846000 audit[5054]: CRED_ACQ pid=5054 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:19.846000 audit[5054]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd999e1d0 a2=3 a3=0 items=0 ppid=1 pid=5054 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:19.846000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:29:19.848744 sshd-session[5054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:29:19.855783 systemd-logind[1576]: New session 20 of user core. Dec 16 03:29:19.863750 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 16 03:29:19.867000 audit[5054]: USER_START pid=5054 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:19.870000 audit[5058]: CRED_ACQ pid=5058 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:20.128298 containerd[1596]: time="2025-12-16T03:29:20.128234030Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:29:20.129537 containerd[1596]: time="2025-12-16T03:29:20.129461495Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 03:29:20.129951 containerd[1596]: time="2025-12-16T03:29:20.129568511Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 03:29:20.131748 kubelet[2782]: E1216 03:29:20.129886 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:29:20.131748 kubelet[2782]: E1216 03:29:20.129945 2782 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:29:20.131748 kubelet[2782]: E1216 03:29:20.130050 2782 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-hrg2x_calico-system(8744312e-a06c-4ec6-97fa-99683d819e93): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 03:29:20.131748 kubelet[2782]: E1216 03:29:20.130118 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hrg2x" podUID="8744312e-a06c-4ec6-97fa-99683d819e93" Dec 16 03:29:20.362014 sshd[5058]: Connection closed by 147.75.109.163 port 57706 Dec 16 
03:29:20.362452 sshd-session[5054]: pam_unix(sshd:session): session closed for user core Dec 16 03:29:20.366000 audit[5054]: USER_END pid=5054 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:20.367000 audit[5054]: CRED_DISP pid=5054 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:20.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-144.126.212.19:22-147.75.109.163:57706 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:29:20.381808 systemd[1]: sshd@18-144.126.212.19:22-147.75.109.163:57706.service: Deactivated successfully. Dec 16 03:29:20.387939 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 03:29:20.391837 systemd-logind[1576]: Session 20 logged out. Waiting for processes to exit. Dec 16 03:29:20.401549 systemd-logind[1576]: Removed session 20. Dec 16 03:29:20.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-144.126.212.19:22-147.75.109.163:57716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:29:20.405468 systemd[1]: Started sshd@19-144.126.212.19:22-147.75.109.163:57716.service - OpenSSH per-connection server daemon (147.75.109.163:57716). Dec 16 03:29:20.416325 containerd[1596]: time="2025-12-16T03:29:20.416104968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:29:20.514000 audit[5068]: USER_ACCT pid=5068 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:20.516250 sshd[5068]: Accepted publickey for core from 147.75.109.163 port 57716 ssh2: RSA SHA256:6kOFCGRVte/xmFQ810xFB2aQq7tPvT6sNyInYz0ISZM Dec 16 03:29:20.516000 audit[5068]: CRED_ACQ pid=5068 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:20.516000 audit[5068]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff5b629320 a2=3 a3=0 items=0 ppid=1 pid=5068 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:20.516000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:29:20.519032 sshd-session[5068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:29:20.526189 systemd-logind[1576]: New session 21 of user core. Dec 16 03:29:20.536437 systemd[1]: Started session-21.scope - Session 21 of User core. 
Dec 16 03:29:20.539000 audit[5068]: USER_START pid=5068 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:20.542000 audit[5072]: CRED_ACQ pid=5072 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:20.665770 sshd[5072]: Connection closed by 147.75.109.163 port 57716 Dec 16 03:29:20.666384 sshd-session[5068]: pam_unix(sshd:session): session closed for user core Dec 16 03:29:20.667000 audit[5068]: USER_END pid=5068 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:20.668000 audit[5068]: CRED_DISP pid=5068 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:20.673200 systemd[1]: sshd@19-144.126.212.19:22-147.75.109.163:57716.service: Deactivated successfully. Dec 16 03:29:20.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-144.126.212.19:22-147.75.109.163:57716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:29:20.676600 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 03:29:20.678556 systemd-logind[1576]: Session 21 logged out. Waiting for processes to exit. Dec 16 03:29:20.679863 systemd-logind[1576]: Removed session 21. 
Dec 16 03:29:20.725000 audit[5083]: NETFILTER_CFG table=filter:137 family=2 entries=26 op=nft_register_rule pid=5083 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:29:20.725000 audit[5083]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff3a1e06f0 a2=0 a3=7fff3a1e06dc items=0 ppid=2935 pid=5083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:20.725000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:29:20.729479 containerd[1596]: time="2025-12-16T03:29:20.729262180Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:29:20.730540 containerd[1596]: time="2025-12-16T03:29:20.730150857Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:29:20.730540 containerd[1596]: time="2025-12-16T03:29:20.730272402Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:29:20.730661 kubelet[2782]: E1216 03:29:20.730468 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:29:20.730661 kubelet[2782]: E1216 03:29:20.730518 2782 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:29:20.730661 kubelet[2782]: E1216 03:29:20.730599 2782 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-699cff98b-rrdrk_calico-apiserver(49d300b0-388a-4cb6-b194-17855a6b768b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:29:20.730661 kubelet[2782]: E1216 03:29:20.730632 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-699cff98b-rrdrk" podUID="49d300b0-388a-4cb6-b194-17855a6b768b" Dec 16 03:29:20.734000 audit[5083]: NETFILTER_CFG table=nat:138 family=2 entries=20 op=nft_register_rule pid=5083 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:29:20.734000 audit[5083]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff3a1e06f0 a2=0 a3=0 items=0 ppid=2935 pid=5083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:20.734000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:29:22.412440 containerd[1596]: time="2025-12-16T03:29:22.412382848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 03:29:22.725743 containerd[1596]: time="2025-12-16T03:29:22.725573240Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:29:22.726665 containerd[1596]: time="2025-12-16T03:29:22.726616113Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 03:29:22.726943 containerd[1596]: time="2025-12-16T03:29:22.726720253Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 03:29:22.727051 kubelet[2782]: E1216 03:29:22.726934 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:29:22.727051 kubelet[2782]: E1216 03:29:22.726983 2782 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:29:22.727451 kubelet[2782]: E1216 03:29:22.727191 2782 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-mx7nf_calico-system(cff3841d-c916-42d7-ba21-c96ff077d2f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 03:29:22.727451 kubelet[2782]: E1216 03:29:22.727228 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mx7nf" podUID="cff3841d-c916-42d7-ba21-c96ff077d2f0" Dec 16 03:29:22.728206 containerd[1596]: time="2025-12-16T03:29:22.727737965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 03:29:23.042864 containerd[1596]: time="2025-12-16T03:29:23.042769075Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:29:23.043956 containerd[1596]: time="2025-12-16T03:29:23.043810548Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 03:29:23.043956 containerd[1596]: time="2025-12-16T03:29:23.043926185Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 03:29:23.044216 kubelet[2782]: E1216 
03:29:23.044173 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:29:23.044265 kubelet[2782]: E1216 03:29:23.044226 2782 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:29:23.044346 kubelet[2782]: E1216 03:29:23.044310 2782 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-64588b6f98-lngtn_calico-system(c131b783-1bf8-4038-b6d0-3485a4a710ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 03:29:23.046584 containerd[1596]: time="2025-12-16T03:29:23.046517940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 03:29:23.384854 containerd[1596]: time="2025-12-16T03:29:23.384660065Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:29:23.385617 containerd[1596]: time="2025-12-16T03:29:23.385570102Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 03:29:23.385707 containerd[1596]: time="2025-12-16T03:29:23.385671922Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 03:29:23.385957 kubelet[2782]: E1216 03:29:23.385851 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:29:23.385957 kubelet[2782]: E1216 03:29:23.385904 2782 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:29:23.386073 kubelet[2782]: E1216 03:29:23.385988 2782 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-64588b6f98-lngtn_calico-system(c131b783-1bf8-4038-b6d0-3485a4a710ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 03:29:23.386073 kubelet[2782]: E1216 03:29:23.386029 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64588b6f98-lngtn" podUID="c131b783-1bf8-4038-b6d0-3485a4a710ea" Dec 16 03:29:23.403274 containerd[1596]: time="2025-12-16T03:29:23.403043204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:29:23.736305 containerd[1596]: time="2025-12-16T03:29:23.736065270Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:29:23.737505 containerd[1596]: time="2025-12-16T03:29:23.737281627Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:29:23.737505 containerd[1596]: time="2025-12-16T03:29:23.737351016Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:29:23.739272 kubelet[2782]: E1216 03:29:23.737552 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:29:23.739272 kubelet[2782]: E1216 03:29:23.737602 2782 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:29:23.739272 kubelet[2782]: E1216 03:29:23.737687 2782 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-699cff98b-z4b6s_calico-apiserver(fb012d08-2ffd-46d4-bf1b-e4743471acfd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:29:23.739272 kubelet[2782]: E1216 03:29:23.737722 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-699cff98b-z4b6s" podUID="fb012d08-2ffd-46d4-bf1b-e4743471acfd" Dec 16 03:29:25.164000 audit[5087]: NETFILTER_CFG table=filter:139 family=2 entries=26 op=nft_register_rule pid=5087 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:29:25.167175 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 16 03:29:25.167297 kernel: audit: type=1325 audit(1765855765.164:857): table=filter:139 family=2 entries=26 op=nft_register_rule pid=5087 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:29:25.164000 audit[5087]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe86c2f100 a2=0 a3=7ffe86c2f0ec items=0 
ppid=2935 pid=5087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:25.172624 kernel: audit: type=1300 audit(1765855765.164:857): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe86c2f100 a2=0 a3=7ffe86c2f0ec items=0 ppid=2935 pid=5087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:25.164000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:29:25.176797 kernel: audit: type=1327 audit(1765855765.164:857): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:29:25.177000 audit[5087]: NETFILTER_CFG table=nat:140 family=2 entries=104 op=nft_register_chain pid=5087 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:29:25.177000 audit[5087]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffe86c2f100 a2=0 a3=7ffe86c2f0ec items=0 ppid=2935 pid=5087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:25.184669 kernel: audit: type=1325 audit(1765855765.177:858): table=nat:140 family=2 entries=104 op=nft_register_chain pid=5087 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:29:25.184757 kernel: audit: type=1300 audit(1765855765.177:858): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffe86c2f100 a2=0 a3=7ffe86c2f0ec items=0 ppid=2935 pid=5087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:25.177000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:29:25.188917 kernel: audit: type=1327 audit(1765855765.177:858): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:29:25.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-144.126.212.19:22-147.75.109.163:37270 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:29:25.689540 systemd[1]: Started sshd@20-144.126.212.19:22-147.75.109.163:37270.service - OpenSSH per-connection server daemon (147.75.109.163:37270). Dec 16 03:29:25.696409 kernel: audit: type=1130 audit(1765855765.688:859): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-144.126.212.19:22-147.75.109.163:37270 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:29:25.769000 audit[5089]: USER_ACCT pid=5089 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:25.772129 sshd[5089]: Accepted publickey for core from 147.75.109.163 port 37270 ssh2: RSA SHA256:6kOFCGRVte/xmFQ810xFB2aQq7tPvT6sNyInYz0ISZM Dec 16 03:29:25.773000 audit[5089]: CRED_ACQ pid=5089 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:25.777141 sshd-session[5089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:29:25.778816 kernel: audit: type=1101 audit(1765855765.769:860): pid=5089 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:25.778936 kernel: audit: type=1103 audit(1765855765.773:861): pid=5089 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:25.781854 kernel: audit: type=1006 audit(1765855765.773:862): pid=5089 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Dec 16 03:29:25.773000 audit[5089]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd9a24ebd0 a2=3 a3=0 items=0 ppid=1 pid=5089 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:25.773000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:29:25.790213 systemd-logind[1576]: New session 22 of user core. Dec 16 03:29:25.800538 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 16 03:29:25.804000 audit[5089]: USER_START pid=5089 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:25.807000 audit[5093]: CRED_ACQ pid=5093 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:25.911184 sshd[5093]: Connection closed by 147.75.109.163 port 37270 Dec 16 03:29:25.912148 sshd-session[5089]: pam_unix(sshd:session): session closed for user core Dec 16 03:29:25.913000 audit[5089]: USER_END pid=5089 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:25.913000 audit[5089]: CRED_DISP pid=5089 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:25.919141 systemd-logind[1576]: Session 22 logged out. Waiting for processes to exit. Dec 16 03:29:25.920116 systemd[1]: sshd@20-144.126.212.19:22-147.75.109.163:37270.service: Deactivated successfully. Dec 16 03:29:25.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-144.126.212.19:22-147.75.109.163:37270 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:29:25.923461 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 03:29:25.928267 systemd-logind[1576]: Removed session 22. Dec 16 03:29:27.401658 kubelet[2782]: E1216 03:29:27.401527 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:29:29.401998 kubelet[2782]: E1216 03:29:29.401838 2782 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:29:30.931720 systemd[1]: Started sshd@21-144.126.212.19:22-147.75.109.163:37274.service - OpenSSH per-connection server daemon (147.75.109.163:37274). Dec 16 03:29:30.939018 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 03:29:30.939194 kernel: audit: type=1130 audit(1765855770.932:868): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-144.126.212.19:22-147.75.109.163:37274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:29:30.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-144.126.212.19:22-147.75.109.163:37274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:29:31.015000 audit[5107]: USER_ACCT pid=5107 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:31.017399 sshd[5107]: Accepted publickey for core from 147.75.109.163 port 37274 ssh2: RSA SHA256:6kOFCGRVte/xmFQ810xFB2aQq7tPvT6sNyInYz0ISZM Dec 16 03:29:31.020353 sshd-session[5107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:29:31.022140 kernel: audit: type=1101 audit(1765855771.015:869): pid=5107 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:31.017000 audit[5107]: CRED_ACQ pid=5107 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:31.027131 kernel: audit: type=1103 audit(1765855771.017:870): pid=5107 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:31.031140 kernel: audit: type=1006 audit(1765855771.018:871): pid=5107 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Dec 16 03:29:31.018000 audit[5107]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea9ab0160 a2=3 a3=0 items=0 ppid=1 pid=5107 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:31.036115 kernel: audit: type=1300 audit(1765855771.018:871): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea9ab0160 a2=3 a3=0 items=0 ppid=1 pid=5107 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:31.018000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:29:31.042114 systemd-logind[1576]: New session 23 of user core. Dec 16 03:29:31.044240 kernel: audit: type=1327 audit(1765855771.018:871): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:29:31.045389 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 16 03:29:31.051000 audit[5107]: USER_START pid=5107 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:31.058183 kernel: audit: type=1105 audit(1765855771.051:872): pid=5107 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:31.057000 audit[5111]: CRED_ACQ pid=5111 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:31.064123 kernel: audit: type=1103 audit(1765855771.057:873): pid=5111 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:31.193209 sshd[5111]: Connection closed by 147.75.109.163 port 37274 Dec 16 03:29:31.194345 sshd-session[5107]: pam_unix(sshd:session): session closed for user core Dec 16 03:29:31.200000 audit[5107]: USER_END pid=5107 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:31.208130 kernel: audit: type=1106 audit(1765855771.200:874): pid=5107 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:31.208315 systemd[1]: sshd@21-144.126.212.19:22-147.75.109.163:37274.service: Deactivated successfully. Dec 16 03:29:31.211902 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 03:29:31.201000 audit[5107]: CRED_DISP pid=5107 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:31.217128 kernel: audit: type=1104 audit(1765855771.201:875): pid=5107 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:31.220636 systemd-logind[1576]: Session 23 logged out. Waiting for processes to exit. Dec 16 03:29:31.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-144.126.212.19:22-147.75.109.163:37274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:29:31.222431 systemd-logind[1576]: Removed session 23. Dec 16 03:29:31.411918 kubelet[2782]: E1216 03:29:31.411530 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d6f5f89f8-mz87q" podUID="34973ba8-5ee2-4cc3-a8d0-65270e641be0" Dec 16 03:29:31.427863 kubelet[2782]: E1216 03:29:31.427809 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-699cff98b-rrdrk" podUID="49d300b0-388a-4cb6-b194-17855a6b768b" Dec 16 03:29:35.407285 kubelet[2782]: E1216 03:29:35.404447 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mx7nf" podUID="cff3841d-c916-42d7-ba21-c96ff077d2f0" Dec 16 03:29:35.407285 kubelet[2782]: E1216 03:29:35.404563 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hrg2x" podUID="8744312e-a06c-4ec6-97fa-99683d819e93" Dec 16 03:29:35.408633 kubelet[2782]: E1216 03:29:35.408498 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64588b6f98-lngtn" podUID="c131b783-1bf8-4038-b6d0-3485a4a710ea" Dec 16 03:29:35.420837 kubelet[2782]: E1216 03:29:35.420689 2782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-699cff98b-z4b6s" podUID="fb012d08-2ffd-46d4-bf1b-e4743471acfd" Dec 16 03:29:36.215597 systemd[1]: Started sshd@22-144.126.212.19:22-147.75.109.163:55716.service - OpenSSH per-connection server daemon (147.75.109.163:55716). Dec 16 03:29:36.223441 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:29:36.223513 kernel: audit: type=1130 audit(1765855776.214:877): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-144.126.212.19:22-147.75.109.163:55716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:29:36.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-144.126.212.19:22-147.75.109.163:55716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:29:36.409000 audit[5146]: USER_ACCT pid=5146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:36.415206 kernel: audit: type=1101 audit(1765855776.409:878): pid=5146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:36.415403 sshd[5146]: Accepted publickey for core from 147.75.109.163 port 55716 ssh2: RSA SHA256:6kOFCGRVte/xmFQ810xFB2aQq7tPvT6sNyInYz0ISZM Dec 16 03:29:36.416000 audit[5146]: CRED_ACQ pid=5146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:36.423885 kernel: audit: type=1103 audit(1765855776.416:879): pid=5146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:36.425634 sshd-session[5146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:29:36.431126 kernel: audit: type=1006 audit(1765855776.421:880): pid=5146 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 16 03:29:36.421000 audit[5146]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe418908e0 a2=3 a3=0 items=0 ppid=1 pid=5146 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:36.437135 kernel: audit: type=1300 audit(1765855776.421:880): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe418908e0 a2=3 a3=0 items=0 ppid=1 pid=5146 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:36.421000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:29:36.440151 kernel: audit: type=1327 audit(1765855776.421:880): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:29:36.447600 systemd-logind[1576]: New session 24 of user core. Dec 16 03:29:36.456722 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 16 03:29:36.460000 audit[5146]: USER_START pid=5146 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:36.467123 kernel: audit: type=1105 audit(1765855776.460:881): pid=5146 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:36.470000 audit[5150]: CRED_ACQ pid=5150 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:36.476141 kernel: audit: type=1103 audit(1765855776.470:882): pid=5150 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:36.897738 sshd[5150]: Connection closed by 147.75.109.163 port 55716 Dec 16 03:29:36.898735 sshd-session[5146]: pam_unix(sshd:session): session closed for user core Dec 16 03:29:36.903000 audit[5146]: USER_END pid=5146 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:36.912144 kernel: audit: type=1106 audit(1765855776.903:883): pid=5146 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:36.913026 systemd[1]: sshd@22-144.126.212.19:22-147.75.109.163:55716.service: Deactivated successfully. 
Dec 16 03:29:36.903000 audit[5146]: CRED_DISP pid=5146 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:36.920130 kernel: audit: type=1104 audit(1765855776.903:884): pid=5146 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:36.921179 systemd[1]: session-24.scope: Deactivated successfully. Dec 16 03:29:36.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-144.126.212.19:22-147.75.109.163:55716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:29:36.924731 systemd-logind[1576]: Session 24 logged out. Waiting for processes to exit. Dec 16 03:29:36.928629 systemd-logind[1576]: Removed session 24.