Jan 24 00:49:42.173460 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 21:38:55 -00 2026 Jan 24 00:49:42.173486 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=ccc6714d5701627f00a0daea097f593263f2ea87c850869ae25db66d36e22877 Jan 24 00:49:42.173495 kernel: BIOS-provided physical RAM map: Jan 24 00:49:42.173502 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable Jan 24 00:49:42.173508 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved Jan 24 00:49:42.173514 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 24 00:49:42.173523 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Jan 24 00:49:42.173530 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Jan 24 00:49:42.173536 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 24 00:49:42.173542 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 24 00:49:42.173549 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 24 00:49:42.173556 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 24 00:49:42.173562 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable Jan 24 00:49:42.173568 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 24 00:49:42.173578 kernel: NX (Execute Disable) protection: active Jan 24 00:49:42.173585 kernel: APIC: Static calls initialized Jan 24 00:49:42.173591 kernel: SMBIOS 2.8 present. 
Jan 24 00:49:42.173598 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified Jan 24 00:49:42.173605 kernel: DMI: Memory slots populated: 1/1 Jan 24 00:49:42.173613 kernel: Hypervisor detected: KVM Jan 24 00:49:42.173620 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Jan 24 00:49:42.173626 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 24 00:49:42.173633 kernel: kvm-clock: using sched offset of 6120301762 cycles Jan 24 00:49:42.173640 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 24 00:49:42.173647 kernel: tsc: Detected 1999.999 MHz processor Jan 24 00:49:42.173654 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 24 00:49:42.173662 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 24 00:49:42.173669 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000 Jan 24 00:49:42.173678 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 24 00:49:42.173685 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 24 00:49:42.173692 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Jan 24 00:49:42.173699 kernel: Using GB pages for direct mapping Jan 24 00:49:42.173706 kernel: ACPI: Early table checksum verification disabled Jan 24 00:49:42.173713 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS ) Jan 24 00:49:42.173720 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:49:42.173729 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:49:42.173736 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:49:42.173743 kernel: ACPI: FACS 0x000000007FFE0000 000040 Jan 24 00:49:42.173750 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:49:42.173757 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:49:42.173767 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:49:42.173777 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:49:42.173784 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea] Jan 24 00:49:42.173791 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6] Jan 24 00:49:42.173799 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Jan 24 00:49:42.173806 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a] Jan 24 00:49:42.173815 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2] Jan 24 00:49:42.173822 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de] Jan 24 00:49:42.173829 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306] Jan 24 00:49:42.173836 kernel: No NUMA configuration found Jan 24 00:49:42.173844 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff] Jan 24 00:49:42.173851 kernel: NODE_DATA(0) allocated [mem 0x17fff8dc0-0x17fffffff] Jan 24 00:49:42.173858 kernel: Zone ranges: Jan 24 00:49:42.173865 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 24 00:49:42.173875 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 24 00:49:42.173896 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff] Jan 24 00:49:42.173903 kernel: Device empty Jan 24 00:49:42.173910 kernel: Movable zone start for each node Jan 24 
00:49:42.173917 kernel: Early memory node ranges Jan 24 00:49:42.173924 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 24 00:49:42.173933 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Jan 24 00:49:42.173949 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff] Jan 24 00:49:42.173961 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] Jan 24 00:49:42.173973 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 24 00:49:42.173985 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 24 00:49:42.173997 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Jan 24 00:49:42.174008 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 24 00:49:42.174021 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 24 00:49:42.174034 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 24 00:49:42.174051 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 24 00:49:42.174063 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 24 00:49:42.174076 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 24 00:49:42.174089 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 24 00:49:42.174098 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 24 00:49:42.174106 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 24 00:49:42.174118 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 24 00:49:42.174134 kernel: TSC deadline timer available Jan 24 00:49:42.174144 kernel: CPU topo: Max. logical packages: 1 Jan 24 00:49:42.174151 kernel: CPU topo: Max. logical dies: 1 Jan 24 00:49:42.174158 kernel: CPU topo: Max. dies per package: 1 Jan 24 00:49:42.174166 kernel: CPU topo: Max. threads per core: 1 Jan 24 00:49:42.174173 kernel: CPU topo: Num. cores per package: 2 Jan 24 00:49:42.174180 kernel: CPU topo: Num. 
threads per package: 2 Jan 24 00:49:42.174187 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jan 24 00:49:42.174197 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 24 00:49:42.174204 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 24 00:49:42.174211 kernel: kvm-guest: setup PV sched yield Jan 24 00:49:42.174218 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 24 00:49:42.174226 kernel: Booting paravirtualized kernel on KVM Jan 24 00:49:42.174233 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 24 00:49:42.174240 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 24 00:49:42.174250 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jan 24 00:49:42.174257 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jan 24 00:49:42.174264 kernel: pcpu-alloc: [0] 0 1 Jan 24 00:49:42.174271 kernel: kvm-guest: PV spinlocks enabled Jan 24 00:49:42.174278 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 24 00:49:42.174287 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=ccc6714d5701627f00a0daea097f593263f2ea87c850869ae25db66d36e22877 Jan 24 00:49:42.174296 kernel: random: crng init done Jan 24 00:49:42.174304 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 24 00:49:42.174311 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 24 00:49:42.174318 kernel: Fallback order for Node 0: 0 Jan 24 00:49:42.174325 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443 Jan 24 00:49:42.174333 kernel: Policy zone: Normal Jan 24 00:49:42.174340 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 24 00:49:42.174347 kernel: software IO TLB: area num 2. Jan 24 00:49:42.174356 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 24 00:49:42.174364 kernel: ftrace: allocating 40128 entries in 157 pages Jan 24 00:49:42.174387 kernel: ftrace: allocated 157 pages with 5 groups Jan 24 00:49:42.174399 kernel: Dynamic Preempt: voluntary Jan 24 00:49:42.174411 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 24 00:49:42.174425 kernel: rcu: RCU event tracing is enabled. Jan 24 00:49:42.174437 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 24 00:49:42.174454 kernel: Trampoline variant of Tasks RCU enabled. Jan 24 00:49:42.174466 kernel: Rude variant of Tasks RCU enabled. Jan 24 00:49:42.174478 kernel: Tracing variant of Tasks RCU enabled. Jan 24 00:49:42.174489 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 24 00:49:42.174501 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 24 00:49:42.174515 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:49:42.174541 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:49:42.174554 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jan 24 00:49:42.174567 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 24 00:49:42.174580 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 24 00:49:42.174598 kernel: Console: colour VGA+ 80x25 Jan 24 00:49:42.174608 kernel: printk: legacy console [tty0] enabled Jan 24 00:49:42.174621 kernel: printk: legacy console [ttyS0] enabled Jan 24 00:49:42.174634 kernel: ACPI: Core revision 20240827 Jan 24 00:49:42.174652 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 24 00:49:42.174665 kernel: APIC: Switch to symmetric I/O mode setup Jan 24 00:49:42.174676 kernel: x2apic enabled Jan 24 00:49:42.174684 kernel: APIC: Switched APIC routing to: physical x2apic Jan 24 00:49:42.174692 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 24 00:49:42.174701 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 24 00:49:42.174714 kernel: kvm-guest: setup PV IPIs Jan 24 00:49:42.174732 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 24 00:49:42.174744 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns Jan 24 00:49:42.174757 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999999) Jan 24 00:49:42.174770 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 24 00:49:42.174783 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 24 00:49:42.174796 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 24 00:49:42.174808 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 24 00:49:42.174825 kernel: Spectre V2 : Mitigation: Retpolines Jan 24 00:49:42.174838 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 24 00:49:42.174850 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jan 24 00:49:42.174862 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 24 00:49:42.174873 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 24 00:49:42.174908 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 24 00:49:42.174924 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 24 00:49:42.174943 kernel: active return thunk: srso_alias_return_thunk Jan 24 00:49:42.174956 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 24 00:49:42.174969 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 24 00:49:42.174982 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 24 00:49:42.174995 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 24 00:49:42.175009 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 24 00:49:42.175025 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 24 00:49:42.175038 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jan 24 00:49:42.175050 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 24 00:49:42.175063 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8 Jan 24 00:49:42.175076 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format. 
Jan 24 00:49:42.175089 kernel: Freeing SMP alternatives memory: 32K Jan 24 00:49:42.175101 kernel: pid_max: default: 32768 minimum: 301 Jan 24 00:49:42.175117 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 24 00:49:42.175129 kernel: landlock: Up and running. Jan 24 00:49:42.175142 kernel: SELinux: Initializing. Jan 24 00:49:42.175154 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 24 00:49:42.175167 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 24 00:49:42.175179 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 24 00:49:42.175192 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jan 24 00:49:42.175207 kernel: ... version: 0 Jan 24 00:49:42.175219 kernel: ... bit width: 48 Jan 24 00:49:42.175232 kernel: ... generic registers: 6 Jan 24 00:49:42.175244 kernel: ... value mask: 0000ffffffffffff Jan 24 00:49:42.175257 kernel: ... max period: 00007fffffffffff Jan 24 00:49:42.175269 kernel: ... fixed-purpose events: 0 Jan 24 00:49:42.175281 kernel: ... event mask: 000000000000003f Jan 24 00:49:42.175294 kernel: signal: max sigframe size: 3376 Jan 24 00:49:42.175310 kernel: rcu: Hierarchical SRCU implementation. Jan 24 00:49:42.175324 kernel: rcu: Max phase no-delay instances is 400. Jan 24 00:49:42.175336 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 24 00:49:42.175349 kernel: smp: Bringing up secondary CPUs ... Jan 24 00:49:42.175362 kernel: smpboot: x86: Booting SMP configuration: Jan 24 00:49:42.175375 kernel: .... node #0, CPUs: #1 Jan 24 00:49:42.175388 kernel: smp: Brought up 1 node, 2 CPUs Jan 24 00:49:42.175404 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS) Jan 24 00:49:42.175417 kernel: Memory: 3978192K/4193772K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15540K init, 2496K bss, 210904K reserved, 0K cma-reserved) Jan 24 00:49:42.175430 kernel: devtmpfs: initialized Jan 24 00:49:42.175443 kernel: x86/mm: Memory block size: 128MB Jan 24 00:49:42.175457 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 24 00:49:42.175469 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 24 00:49:42.175484 kernel: pinctrl core: initialized pinctrl subsystem Jan 24 00:49:42.175500 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 24 00:49:42.175512 kernel: audit: initializing netlink subsys (disabled) Jan 24 00:49:42.175524 kernel: audit: type=2000 audit(1769215779.112:1): state=initialized audit_enabled=0 res=1 Jan 24 00:49:42.175536 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 24 00:49:42.175548 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 24 00:49:42.175561 kernel: cpuidle: using governor menu Jan 24 00:49:42.175574 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 24 00:49:42.175591 kernel: dca service started, version 1.12.1 Jan 24 00:49:42.175603 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Jan 24 00:49:42.175616 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 24 00:49:42.175628 kernel: PCI: Using configuration type 1 for base access Jan 24 00:49:42.175641 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 24 00:49:42.175653 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 24 00:49:42.175664 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 24 00:49:42.175674 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 24 00:49:42.175682 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 24 00:49:42.175689 kernel: ACPI: Added _OSI(Module Device) Jan 24 00:49:42.175697 kernel: ACPI: Added _OSI(Processor Device) Jan 24 00:49:42.175705 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 24 00:49:42.175712 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 24 00:49:42.175720 kernel: ACPI: Interpreter enabled Jan 24 00:49:42.175729 kernel: ACPI: PM: (supports S0 S3 S5) Jan 24 00:49:42.175737 kernel: ACPI: Using IOAPIC for interrupt routing Jan 24 00:49:42.175745 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 24 00:49:42.175752 kernel: PCI: Using E820 reservations for host bridge windows Jan 24 00:49:42.175760 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 24 00:49:42.175767 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 24 00:49:42.176064 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 24 00:49:42.176262 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 24 00:49:42.176446 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 24 00:49:42.176457 kernel: PCI host bridge to bus 0000:00 Jan 24 00:49:42.176634 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 24 00:49:42.176798 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 24 00:49:42.176985 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 24 00:49:42.177149 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Jan 24 00:49:42.177331 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 24 00:49:42.177557 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window] Jan 24 00:49:42.177725 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 24 00:49:42.177940 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 24 00:49:42.178156 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jan 24 00:49:42.178335 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Jan 24 00:49:42.178509 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Jan 24 00:49:42.178687 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Jan 24 00:49:42.178858 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 24 00:49:42.179060 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint Jan 24 00:49:42.179244 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f] Jan 24 00:49:42.179417 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Jan 24 00:49:42.179591 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Jan 24 00:49:42.179796 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 24 00:49:42.181697 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f] Jan 24 00:49:42.181930 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] Jan 24 00:49:42.182705 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit 
pref] Jan 24 00:49:42.182898 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref] Jan 24 00:49:42.183089 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 24 00:49:42.183265 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 24 00:49:42.183451 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 24 00:49:42.183631 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df] Jan 24 00:49:42.183841 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff] Jan 24 00:49:42.184121 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 24 00:49:42.184348 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Jan 24 00:49:42.184362 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 24 00:49:42.184375 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 24 00:49:42.184382 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 24 00:49:42.184390 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 24 00:49:42.184398 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 24 00:49:42.184406 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 24 00:49:42.184413 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 24 00:49:42.184421 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 24 00:49:42.184431 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 24 00:49:42.184438 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 24 00:49:42.184445 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 24 00:49:42.184453 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 24 00:49:42.184461 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 24 00:49:42.184473 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 24 00:49:42.184486 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 24 00:49:42.184498 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 24 00:49:42.184514 kernel: iommu: Default domain type: Translated Jan 24 00:49:42.184527 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 24 00:49:42.184540 kernel: PCI: Using ACPI for IRQ routing Jan 24 00:49:42.184553 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 24 00:49:42.184562 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff] Jan 24 00:49:42.184570 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Jan 24 00:49:42.184755 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 24 00:49:42.184951 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 24 00:49:42.185127 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 24 00:49:42.185137 kernel: vgaarb: loaded Jan 24 00:49:42.185145 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 24 00:49:42.185153 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 24 00:49:42.185160 kernel: clocksource: Switched to clocksource kvm-clock Jan 24 00:49:42.185172 kernel: VFS: Disk quotas dquot_6.6.0 Jan 24 00:49:42.185180 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 24 00:49:42.185187 kernel: pnp: PnP ACPI init Jan 24 00:49:42.185376 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 24 00:49:42.185389 kernel: pnp: PnP ACPI: found 5 devices Jan 24 00:49:42.185396 kernel: clocksource: acpi_pm: mask: 0xffffff 
max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 24 00:49:42.185404 kernel: NET: Registered PF_INET protocol family Jan 24 00:49:42.185415 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 24 00:49:42.185423 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 24 00:49:42.185430 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 24 00:49:42.185438 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 24 00:49:42.185446 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 24 00:49:42.185453 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 24 00:49:42.185461 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 24 00:49:42.185471 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 24 00:49:42.185478 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 24 00:49:42.185486 kernel: NET: Registered PF_XDP protocol family Jan 24 00:49:42.185651 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 24 00:49:42.185813 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 24 00:49:42.186020 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 24 00:49:42.186183 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Jan 24 00:49:42.186349 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 24 00:49:42.186508 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window] Jan 24 00:49:42.186519 kernel: PCI: CLS 0 bytes, default 64 Jan 24 00:49:42.186526 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 24 00:49:42.186534 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB) Jan 24 00:49:42.186542 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns Jan 24 00:49:42.186553 kernel: Initialise system trusted keyrings Jan 24 00:49:42.186560 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 24 00:49:42.186568 kernel: Key type asymmetric registered Jan 24 00:49:42.186575 kernel: Asymmetric key parser 'x509' registered Jan 24 00:49:42.186583 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 24 00:49:42.186591 kernel: io scheduler mq-deadline registered Jan 24 00:49:42.186598 kernel: io scheduler kyber registered Jan 24 00:49:42.186605 kernel: io scheduler bfq registered Jan 24 00:49:42.186615 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 24 00:49:42.186623 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 24 00:49:42.186631 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 24 00:49:42.186639 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 24 00:49:42.186646 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 24 00:49:42.186654 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 24 00:49:42.186661 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 24 00:49:42.186671 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 24 00:49:42.186679 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 24 00:49:42.186906 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 24 00:49:42.187086 kernel: rtc_cmos 00:03: registered as rtc0 Jan 24 00:49:42.187256 kernel: rtc_cmos 00:03: setting system clock to 
2026-01-24T00:49:40 UTC (1769215780) Jan 24 00:49:42.187448 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 24 00:49:42.187466 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 24 00:49:42.187474 kernel: NET: Registered PF_INET6 protocol family Jan 24 00:49:42.187481 kernel: Segment Routing with IPv6 Jan 24 00:49:42.187489 kernel: In-situ OAM (IOAM) with IPv6 Jan 24 00:49:42.187497 kernel: NET: Registered PF_PACKET protocol family Jan 24 00:49:42.187504 kernel: Key type dns_resolver registered Jan 24 00:49:42.187512 kernel: IPI shorthand broadcast: enabled Jan 24 00:49:42.187522 kernel: sched_clock: Marking stable (1793005073, 337625815)->(2232617525, -101986637) Jan 24 00:49:42.187529 kernel: registered taskstats version 1 Jan 24 00:49:42.187537 kernel: Loading compiled-in X.509 certificates Jan 24 00:49:42.187546 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 08600fac738f210e3b32f727339edfe2b1af2e3d' Jan 24 00:49:42.187553 kernel: Demotion targets for Node 0: null Jan 24 00:49:42.187561 kernel: Key type .fscrypt registered Jan 24 00:49:42.187568 kernel: Key type fscrypt-provisioning registered Jan 24 00:49:42.187578 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 24 00:49:42.187586 kernel: ima: Allocated hash algorithm: sha1 Jan 24 00:49:42.187593 kernel: ima: No architecture policies found Jan 24 00:49:42.187614 kernel: clk: Disabling unused clocks Jan 24 00:49:42.187622 kernel: Freeing unused kernel image (initmem) memory: 15540K Jan 24 00:49:42.187630 kernel: Write protecting the kernel read-only data: 47104k Jan 24 00:49:42.187637 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 24 00:49:42.187647 kernel: Run /init as init process Jan 24 00:49:42.187655 kernel: with arguments: Jan 24 00:49:42.187662 kernel: /init Jan 24 00:49:42.187670 kernel: with environment: Jan 24 00:49:42.187678 kernel: HOME=/ Jan 24 00:49:42.187703 kernel: TERM=linux Jan 24 00:49:42.187714 kernel: SCSI subsystem initialized Jan 24 00:49:42.187724 kernel: libata version 3.00 loaded. 
Jan 24 00:49:42.187949 kernel: ahci 0000:00:1f.2: version 3.0 Jan 24 00:49:42.187964 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 24 00:49:42.188140 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 24 00:49:42.188316 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 24 00:49:42.188491 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 24 00:49:42.188689 kernel: scsi host0: ahci Jan 24 00:49:42.188896 kernel: scsi host1: ahci Jan 24 00:49:42.189104 kernel: scsi host2: ahci Jan 24 00:49:42.189292 kernel: scsi host3: ahci Jan 24 00:49:42.189480 kernel: scsi host4: ahci Jan 24 00:49:42.189669 kernel: scsi host5: ahci Jan 24 00:49:42.189685 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 24 lpm-pol 1 Jan 24 00:49:42.189694 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 24 lpm-pol 1 Jan 24 00:49:42.189702 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 24 lpm-pol 1 Jan 24 00:49:42.189710 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 24 lpm-pol 1 Jan 24 00:49:42.189718 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 24 lpm-pol 1 Jan 24 00:49:42.189725 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 24 lpm-pol 1 Jan 24 00:49:42.189736 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 24 00:49:42.189744 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 24 00:49:42.189752 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 24 00:49:42.189760 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 24 00:49:42.189768 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 24 00:49:42.189776 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 24 00:49:42.190026 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues Jan 24 00:49:42.190226 kernel: scsi host6: Virtio SCSI HBA Jan 24 00:49:42.190488 kernel: scsi 6:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 24 00:49:42.190697 kernel: sd 6:0:0:0: Power-on or device reset occurred Jan 24 00:49:42.190919 kernel: sd 6:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB) Jan 24 00:49:42.191120 kernel: sd 6:0:0:0: [sda] Write Protect is off Jan 24 00:49:42.191312 kernel: sd 6:0:0:0: [sda] Mode Sense: 63 00 00 08 Jan 24 00:49:42.191510 kernel: sd 6:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 24 00:49:42.191521 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 24 00:49:42.191529 kernel: GPT:25804799 != 167739391 Jan 24 00:49:42.191537 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 24 00:49:42.191545 kernel: GPT:25804799 != 167739391 Jan 24 00:49:42.191553 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 24 00:49:42.191565 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 24 00:49:42.191764 kernel: sd 6:0:0:0: [sda] Attached SCSI disk Jan 24 00:49:42.191785 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 24 00:49:42.191794 kernel: device-mapper: uevent: version 1.0.3 Jan 24 00:49:42.191803 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 24 00:49:42.191811 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 24 00:49:42.191819 kernel: raid6: avx2x4 gen() 33624 MB/s Jan 24 00:49:42.191831 kernel: raid6: avx2x2 gen() 34458 MB/s Jan 24 00:49:42.191839 kernel: raid6: avx2x1 gen() 25489 MB/s Jan 24 00:49:42.191847 kernel: raid6: using algorithm avx2x2 gen() 34458 MB/s Jan 24 00:49:42.191855 kernel: raid6: .... xor() 28604 MB/s, rmw enabled Jan 24 00:49:42.191865 kernel: raid6: using avx2x2 recovery algorithm Jan 24 00:49:42.191873 kernel: xor: automatically using best checksumming function avx Jan 24 00:49:42.191908 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 24 00:49:42.191918 kernel: BTRFS: device fsid 091bfa4a-922a-4e6e-abc1-a4b74083975f devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (167) Jan 24 00:49:42.191926 kernel: BTRFS info (device dm-0): first mount of filesystem 091bfa4a-922a-4e6e-abc1-a4b74083975f Jan 24 00:49:42.191934 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:49:42.191942 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 24 00:49:42.191954 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 24 00:49:42.191962 kernel: BTRFS info (device dm-0): enabling free space tree Jan 24 00:49:42.191970 kernel: loop: module loaded Jan 24 00:49:42.191978 kernel: loop0: detected capacity change from 0 to 100560 Jan 24 00:49:42.191986 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 24 00:49:42.191998 systemd[1]: Successfully made /usr/ read-only. Jan 24 00:49:42.192009 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 24 00:49:42.192020 systemd[1]: Detected virtualization kvm. Jan 24 00:49:42.192028 systemd[1]: Detected architecture x86-64. Jan 24 00:49:42.192037 systemd[1]: Running in initrd. Jan 24 00:49:42.192045 systemd[1]: No hostname configured, using default hostname. Jan 24 00:49:42.192054 systemd[1]: Hostname set to . Jan 24 00:49:42.192062 systemd[1]: Initializing machine ID from random generator. Jan 24 00:49:42.192072 systemd[1]: Queued start job for default target initrd.target. Jan 24 00:49:42.192081 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 24 00:49:42.192089 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:49:42.192098 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:49:42.192107 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 24 00:49:42.192116 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:49:42.192127 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 24 00:49:42.192136 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Jan 24 00:49:42.192144 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:49:42.192153 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:49:42.192161 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 24 00:49:42.192170 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:49:42.192181 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:49:42.192189 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:49:42.192197 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:49:42.192206 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:49:42.192214 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:49:42.192223 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 24 00:49:42.192231 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 24 00:49:42.192242 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 24 00:49:42.192250 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:49:42.192259 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:49:42.192267 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:49:42.192276 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:49:42.192284 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 24 00:49:42.192293 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 24 00:49:42.192303 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:49:42.192312 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 24 00:49:42.192321 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 24 00:49:42.192329 systemd[1]: Starting systemd-fsck-usr.service... Jan 24 00:49:42.192338 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:49:42.192346 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:49:42.192357 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:49:42.192366 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 24 00:49:42.192374 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:49:42.192407 systemd-journald[304]: Collecting audit messages is enabled. Jan 24 00:49:42.192433 systemd[1]: Finished systemd-fsck-usr.service. Jan 24 00:49:42.192449 kernel: audit: type=1130 audit(1769215782.169:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:42.192464 kernel: audit: type=1130 audit(1769215782.179:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:49:42.192481 systemd-journald[304]: Journal started Jan 24 00:49:42.192508 systemd-journald[304]: Runtime Journal (/run/log/journal/3199f3708f0b4637a3c071645fb1cacc) is 8M, max 78.1M, 70.1M free. Jan 24 00:49:42.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:42.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:42.202915 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 00:49:42.219915 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 24 00:49:42.223908 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:49:42.223940 kernel: Bridge firewalling registered Jan 24 00:49:42.224909 systemd-modules-load[305]: Inserted module 'br_netfilter' Jan 24 00:49:42.318345 kernel: audit: type=1130 audit(1769215782.309:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:42.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:42.312014 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:49:42.329242 kernel: audit: type=1130 audit(1769215782.318:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:42.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:42.319510 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:49:42.337767 kernel: audit: type=1130 audit(1769215782.329:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:42.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:42.331603 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:49:42.347149 kernel: audit: type=1130 audit(1769215782.337:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:42.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:42.343017 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 24 00:49:42.358492 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:49:42.360744 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:49:42.367073 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:49:42.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:42.377999 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:49:42.387284 kernel: audit: type=1130 audit(1769215782.377:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:42.387437 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:49:42.396223 kernel: audit: type=1334 audit(1769215782.385:9): prog-id=6 op=LOAD Jan 24 00:49:42.385000 audit: BPF prog-id=6 op=LOAD Jan 24 00:49:42.393415 systemd-tmpfiles[324]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 24 00:49:42.406457 kernel: audit: type=1130 audit(1769215782.396:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:42.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:42.396402 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:49:42.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:42.398373 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:49:42.409976 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:49:42.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:42.414022 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 24 00:49:42.439934 dracut-cmdline[343]: dracut-109 Jan 24 00:49:42.444762 dracut-cmdline[343]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=ccc6714d5701627f00a0daea097f593263f2ea87c850869ae25db66d36e22877 Jan 24 00:49:42.450238 systemd-resolved[337]: Positive Trust Anchors: Jan 24 00:49:42.451314 systemd-resolved[337]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:49:42.452407 systemd-resolved[337]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 24 00:49:42.452436 systemd-resolved[337]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:49:42.479634 systemd-resolved[337]: Defaulting to hostname 'linux'. Jan 24 00:49:42.480613 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:49:42.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:42.482680 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:49:42.546917 kernel: Loading iSCSI transport class v2.0-870. Jan 24 00:49:42.560933 kernel: iscsi: registered transport (tcp) Jan 24 00:49:42.604809 kernel: iscsi: registered transport (qla4xxx) Jan 24 00:49:42.604857 kernel: QLogic iSCSI HBA Driver Jan 24 00:49:42.633311 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 24 00:49:42.649125 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 24 00:49:42.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:42.650563 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 24 00:49:42.696791 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 24 00:49:42.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:42.699694 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 24 00:49:42.704009 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 24 00:49:42.733592 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:49:42.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:42.735000 audit: BPF prog-id=7 op=LOAD Jan 24 00:49:42.735000 audit: BPF prog-id=8 op=LOAD Jan 24 00:49:42.738042 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:49:42.767075 systemd-udevd[576]: Using default interface naming scheme 'v257'. Jan 24 00:49:42.779677 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:49:42.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:49:42.783011 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 24 00:49:42.810501 dracut-pre-trigger[645]: rd.md=0: removing MD RAID activation Jan 24 00:49:42.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:42.815000 audit: BPF prog-id=9 op=LOAD Jan 24 00:49:42.814991 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:49:42.818734 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:49:42.842616 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:49:42.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:42.846113 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:49:42.867173 systemd-networkd[686]: lo: Link UP Jan 24 00:49:42.867981 systemd-networkd[686]: lo: Gained carrier Jan 24 00:49:42.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:42.868493 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:49:42.869266 systemd[1]: Reached target network.target - Network. Jan 24 00:49:42.944754 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:49:42.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:42.949399 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 24 00:49:43.084370 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 24 00:49:43.093992 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 24 00:49:43.266135 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 24 00:49:43.283923 kernel: cryptd: max_cpu_qlen set to 1000 Jan 24 00:49:43.300215 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:49:43.354995 kernel: AES CTR mode by8 optimization enabled Jan 24 00:49:43.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:43.325575 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:49:43.337616 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:49:43.337643 systemd-networkd[686]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 24 00:49:43.337650 systemd-networkd[686]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 24 00:49:43.339183 systemd-networkd[686]: eth0: Link UP Jan 24 00:49:43.340047 systemd-networkd[686]: eth0: Gained carrier Jan 24 00:49:43.340067 systemd-networkd[686]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 24 00:49:43.341076 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:49:43.380160 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 24 00:49:43.390670 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 24 00:49:43.475597 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:49:43.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:43.482380 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 24 00:49:43.490997 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 24 00:49:43.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:43.492809 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:49:43.494791 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:49:43.495528 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:49:43.501453 disk-uuid[820]: Primary Header is updated. Jan 24 00:49:43.501453 disk-uuid[820]: Secondary Entries is updated. Jan 24 00:49:43.501453 disk-uuid[820]: Secondary Header is updated. Jan 24 00:49:43.503022 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 24 00:49:43.577753 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:49:43.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:44.185987 systemd-networkd[686]: eth0: DHCPv4 address 172.239.50.209/24, gateway 172.239.50.1 acquired from 23.205.167.250 Jan 24 00:49:44.573826 disk-uuid[821]: Warning: The kernel is still using the old partition table. Jan 24 00:49:44.573826 disk-uuid[821]: The new table will be used at the next reboot or after you Jan 24 00:49:44.573826 disk-uuid[821]: run partprobe(8) or kpartx(8) Jan 24 00:49:44.573826 disk-uuid[821]: The operation has completed successfully. Jan 24 00:49:44.582991 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 24 00:49:44.600195 kernel: kauditd_printk_skb: 18 callbacks suppressed Jan 24 00:49:44.600217 kernel: audit: type=1130 audit(1769215784.582:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:44.600231 kernel: audit: type=1131 audit(1769215784.583:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:49:44.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:44.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:44.583132 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 24 00:49:44.584963 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 24 00:49:44.629970 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (842) Jan 24 00:49:44.634221 kernel: BTRFS info (device sda6): first mount of filesystem 98bf19d7-5744-4291-8d20-e6403ff726cc Jan 24 00:49:44.634262 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:49:44.643511 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 24 00:49:44.643544 kernel: BTRFS info (device sda6): turning on async discard Jan 24 00:49:44.643569 kernel: BTRFS info (device sda6): enabling free space tree Jan 24 00:49:44.654905 kernel: BTRFS info (device sda6): last unmount of filesystem 98bf19d7-5744-4291-8d20-e6403ff726cc Jan 24 00:49:44.655293 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 24 00:49:44.665131 kernel: audit: type=1130 audit(1769215784.655:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:44.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:44.659070 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 24 00:49:44.789844 ignition[861]: Ignition 2.24.0 Jan 24 00:49:44.789860 ignition[861]: Stage: fetch-offline Jan 24 00:49:44.789919 ignition[861]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:49:44.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:44.792492 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:49:44.802824 kernel: audit: type=1130 audit(1769215784.792:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:44.789932 ignition[861]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 24 00:49:44.796071 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 24 00:49:44.790003 ignition[861]: parsed url from cmdline: "" Jan 24 00:49:44.790007 ignition[861]: no config URL provided Jan 24 00:49:44.790012 ignition[861]: reading system config file "/usr/lib/ignition/user.ign" Jan 24 00:49:44.790023 ignition[861]: no config at "/usr/lib/ignition/user.ign" Jan 24 00:49:44.790035 ignition[861]: failed to fetch config: resource requires networking Jan 24 00:49:44.790229 ignition[861]: Ignition finished successfully Jan 24 00:49:44.820748 ignition[868]: Ignition 2.24.0 Jan 24 00:49:44.820755 ignition[868]: Stage: fetch Jan 24 00:49:44.820908 ignition[868]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:49:44.820919 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 24 00:49:44.821008 ignition[868]: parsed url from cmdline: "" Jan 24 00:49:44.821012 ignition[868]: no config URL provided Jan 24 00:49:44.821020 ignition[868]: reading system config file "/usr/lib/ignition/user.ign" Jan 24 00:49:44.821028 ignition[868]: no config at "/usr/lib/ignition/user.ign" Jan 24 00:49:44.821063 ignition[868]: PUT http://169.254.169.254/v1/token: attempt #1 Jan 24 00:49:44.927596 ignition[868]: PUT result: OK Jan 24 00:49:44.927661 ignition[868]: GET http://169.254.169.254/v1/user-data: attempt #1 Jan 24 00:49:44.956047 systemd-networkd[686]: eth0: Gained IPv6LL Jan 24 00:49:45.041255 ignition[868]: GET result: OK Jan 24 00:49:45.041392 ignition[868]: parsing config with SHA512: 98873cd4adf4d0db5b5227d66242c2089ae86ec6f317048411898904da0e326acad92d87cc0d2eb8857b31cb43b44f93cfa67062a64070d25f0f19db726f1926 Jan 24 00:49:45.047193 unknown[868]: fetched base config from "system" Jan 24 00:49:45.047972 unknown[868]: fetched base config from "system" Jan 24 00:49:45.048469 ignition[868]: fetch: fetch complete Jan 24 00:49:45.047983 unknown[868]: fetched user config from "akamai" Jan 24 00:49:45.048475 ignition[868]: fetch: fetch passed Jan 24 00:49:45.051624 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 24 00:49:45.061462 kernel: audit: type=1130 audit(1769215785.051:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:45.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:45.048516 ignition[868]: Ignition finished successfully Jan 24 00:49:45.055015 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 24 00:49:45.086261 ignition[874]: Ignition 2.24.0 Jan 24 00:49:45.086276 ignition[874]: Stage: kargs Jan 24 00:49:45.086410 ignition[874]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:49:45.086420 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 24 00:49:45.097441 kernel: audit: type=1130 audit(1769215785.088:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:45.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:45.088840 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jan 24 00:49:45.087142 ignition[874]: kargs: kargs passed Jan 24 00:49:45.092013 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 24 00:49:45.087181 ignition[874]: Ignition finished successfully Jan 24 00:49:45.121104 ignition[880]: Ignition 2.24.0 Jan 24 00:49:45.121129 ignition[880]: Stage: disks Jan 24 00:49:45.121272 ignition[880]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:49:45.121283 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 24 00:49:45.123570 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 24 00:49:45.132636 kernel: audit: type=1130 audit(1769215785.124:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:45.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:45.122063 ignition[880]: disks: disks passed Jan 24 00:49:45.125425 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 24 00:49:45.122100 ignition[880]: Ignition finished successfully Jan 24 00:49:45.133344 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 24 00:49:45.134737 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 00:49:45.136328 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:49:45.137702 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:49:45.140218 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 24 00:49:45.172641 systemd-fsck[889]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 24 00:49:45.175520 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 24 00:49:45.186071 kernel: audit: type=1130 audit(1769215785.175:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:45.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:45.179054 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 24 00:49:45.300926 kernel: EXT4-fs (sda9): mounted filesystem 4e30a7d6-83d2-471c-98e0-68a57c0656af r/w with ordered data mode. Quota mode: none. Jan 24 00:49:45.301169 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 24 00:49:45.302374 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 24 00:49:45.304871 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:49:45.307967 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 24 00:49:45.310478 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 24 00:49:45.312024 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 24 00:49:45.312061 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:49:45.319668 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Jan 24 00:49:45.322657 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 24 00:49:45.329925 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (897) Jan 24 00:49:45.335913 kernel: BTRFS info (device sda6): first mount of filesystem 98bf19d7-5744-4291-8d20-e6403ff726cc Jan 24 00:49:45.335960 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:49:45.345351 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 24 00:49:45.345394 kernel: BTRFS info (device sda6): turning on async discard Jan 24 00:49:45.345408 kernel: BTRFS info (device sda6): enabling free space tree Jan 24 00:49:45.349258 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:49:45.522649 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 24 00:49:45.531924 kernel: audit: type=1130 audit(1769215785.522:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:45.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:45.526133 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 24 00:49:45.533399 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 24 00:49:45.554939 kernel: BTRFS info (device sda6): last unmount of filesystem 98bf19d7-5744-4291-8d20-e6403ff726cc Jan 24 00:49:45.576163 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 24 00:49:45.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:45.585021 kernel: audit: type=1130 audit(1769215785.576:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:45.587454 ignition[996]: INFO : Ignition 2.24.0 Jan 24 00:49:45.587454 ignition[996]: INFO : Stage: mount Jan 24 00:49:45.589117 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:49:45.589117 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 24 00:49:45.593388 ignition[996]: INFO : mount: mount passed Jan 24 00:49:45.593388 ignition[996]: INFO : Ignition finished successfully Jan 24 00:49:45.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:45.592766 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 24 00:49:45.596008 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 24 00:49:45.617635 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 24 00:49:45.619270 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 24 00:49:45.642067 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1007) Jan 24 00:49:45.642133 kernel: BTRFS info (device sda6): first mount of filesystem 98bf19d7-5744-4291-8d20-e6403ff726cc Jan 24 00:49:45.645367 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:49:45.653009 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 24 00:49:45.653037 kernel: BTRFS info (device sda6): turning on async discard Jan 24 00:49:45.655364 kernel: BTRFS info (device sda6): enabling free space tree Jan 24 00:49:45.659785 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:49:45.687757 ignition[1024]: INFO : Ignition 2.24.0 Jan 24 00:49:45.687757 ignition[1024]: INFO : Stage: files Jan 24 00:49:45.689541 ignition[1024]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:49:45.689541 ignition[1024]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 24 00:49:45.689541 ignition[1024]: DEBUG : files: compiled without relabeling support, skipping Jan 24 00:49:45.689541 ignition[1024]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 24 00:49:45.689541 ignition[1024]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 24 00:49:45.695067 ignition[1024]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 24 00:49:45.695067 ignition[1024]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 24 00:49:45.718939 ignition[1024]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 24 00:49:45.718939 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 24 00:49:45.718939 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 24 00:49:45.695100 unknown[1024]: wrote ssh authorized keys file for user: core Jan 24 00:49:45.932554 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 24 00:49:46.167690 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 24 00:49:46.169237 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 24 00:49:46.169237 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 24 00:49:46.169237 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:49:46.169237 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:49:46.169237 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:49:46.169237 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:49:46.169237 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:49:46.169237 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:49:46.178326 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:49:46.178326 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:49:46.178326 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 24 00:49:46.178326 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 24 00:49:46.178326 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 24 00:49:46.178326 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 24 00:49:46.695607 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 24 00:49:47.478595 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 24 00:49:47.478595 ignition[1024]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 24 00:49:47.481929 ignition[1024]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:49:47.483199 ignition[1024]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:49:47.483199 ignition[1024]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 24 00:49:47.483199 ignition[1024]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 24 00:49:47.483199 ignition[1024]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 24 00:49:47.483199 ignition[1024]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 24 00:49:47.483199 ignition[1024]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 24 00:49:47.483199 ignition[1024]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Jan 24 00:49:47.483199 ignition[1024]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Jan 24 00:49:47.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:49:47.497318 ignition[1024]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:49:47.497318 ignition[1024]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:49:47.497318 ignition[1024]: INFO : files: files passed Jan 24 00:49:47.497318 ignition[1024]: INFO : Ignition finished successfully Jan 24 00:49:47.486731 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 24 00:49:47.490058 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 24 00:49:47.494621 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 24 00:49:47.507984 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 24 00:49:47.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:47.509178 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 24 00:49:47.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:47.517625 initrd-setup-root-after-ignition[1056]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:49:47.519057 initrd-setup-root-after-ignition[1056]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:49:47.521237 initrd-setup-root-after-ignition[1060]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:49:47.523826 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:49:47.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:47.526066 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 24 00:49:47.530762 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 24 00:49:47.578397 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 24 00:49:47.578523 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 24 00:49:47.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:47.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:47.580230 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 24 00:49:47.581591 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 24 00:49:47.584122 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 24 00:49:47.585015 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 24 00:49:47.612499 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Jan 24 00:49:47.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:47.615227 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 24 00:49:47.631470 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 24 00:49:47.631675 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:49:47.632579 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:49:47.634349 systemd[1]: Stopped target timers.target - Timer Units. Jan 24 00:49:47.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:47.636005 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 24 00:49:47.636114 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:49:47.638090 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 24 00:49:47.639133 systemd[1]: Stopped target basic.target - Basic System. Jan 24 00:49:47.640521 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 24 00:49:47.642146 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:49:47.643614 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 24 00:49:47.645138 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 24 00:49:47.646870 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 24 00:49:47.648493 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:49:47.650161 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 24 00:49:47.651758 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 24 00:49:47.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:47.653352 systemd[1]: Stopped target swap.target - Swaps. Jan 24 00:49:47.654938 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 24 00:49:47.655085 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:49:47.656996 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:49:47.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:47.658033 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:49:47.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:47.659498 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 24 00:49:47.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:49:47.659712 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:49:47.661036 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 24 00:49:47.661174 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 24 00:49:47.663097 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 24 00:49:47.663207 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:49:47.685966 systemd[1]: ignition-files.service: Deactivated successfully. Jan 24 00:49:47.686111 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 24 00:49:47.689969 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 24 00:49:47.694954 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 24 00:49:47.698826 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 24 00:49:47.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:47.699025 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:49:47.700524 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 24 00:49:47.700669 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:49:47.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:47.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:47.704046 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 24 00:49:47.704148 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:49:47.716198 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 24 00:49:47.716316 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 24 00:49:47.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:47.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:47.723636 ignition[1080]: INFO : Ignition 2.24.0 Jan 24 00:49:47.723636 ignition[1080]: INFO : Stage: umount Jan 24 00:49:47.726962 ignition[1080]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:49:47.726962 ignition[1080]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 24 00:49:47.726962 ignition[1080]: INFO : umount: umount passed Jan 24 00:49:47.726962 ignition[1080]: INFO : Ignition finished successfully Jan 24 00:49:47.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:49:47.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:47.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:47.726628 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 24 00:49:47.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:47.726746 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 24 00:49:47.727753 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 24 00:49:47.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:47.727808 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 24 00:49:47.729318 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 24 00:49:47.729371 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 24 00:49:47.731965 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 24 00:49:47.732058 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 24 00:49:47.733027 systemd[1]: Stopped target network.target - Network. Jan 24 00:49:47.734342 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 24 00:49:47.734398 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:49:47.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:47.736968 systemd[1]: Stopped target paths.target - Path Units. Jan 24 00:49:47.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:47.737729 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 24 00:49:47.738907 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:49:47.740267 systemd[1]: Stopped target slices.target - Slice Units. Jan 24 00:49:47.741983 systemd[1]: Stopped target sockets.target - Socket Units. Jan 24 00:49:47.742676 systemd[1]: iscsid.socket: Deactivated successfully. Jan 24 00:49:47.742723 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:49:47.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:47.744988 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 24 00:49:47.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:47.745030 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jan 24 00:49:47.746494 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 24 00:49:47.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:47.746530 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 24 00:49:47.747344 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 24 00:49:47.747399 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 24 00:49:47.748975 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 24 00:49:47.765000 audit: BPF prog-id=9 op=UNLOAD Jan 24 00:49:47.766000 audit: BPF prog-id=6 op=UNLOAD Jan 24 00:49:47.749025 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 24 00:49:47.750631 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 24 00:49:47.752191 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 24 00:49:47.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:47.755549 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 24 00:49:47.756275 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 24 00:49:47.756384 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 24 00:49:47.760004 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 24 00:49:47.760129 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 24 00:49:47.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:47.762413 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 24 00:49:47.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:47.762522 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 24 00:49:47.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:47.766700 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 24 00:49:47.768394 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 24 00:49:47.768439 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:49:47.769961 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 24 00:49:47.770021 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 24 00:49:47.772351 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 24 00:49:47.773841 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 24 00:49:47.775971 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:49:47.776832 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 24 00:49:47.776903 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jan 24 00:49:47.778385 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 24 00:49:47.778437 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 24 00:49:47.780619 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:49:47.798929 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 24 00:49:47.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:47.799926 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:49:47.801133 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 24 00:49:47.801208 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 24 00:49:47.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:47.802634 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 24 00:49:47.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:47.802678 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:49:47.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:47.804174 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 24 00:49:47.804226 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:49:47.805694 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 24 00:49:47.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:47.805747 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 24 00:49:47.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:47.807451 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:49:47.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:47.807506 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:49:47.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:47.810024 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 24 00:49:47.812327 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 24 00:49:47.812385 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. 
Jan 24 00:49:47.814989 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 24 00:49:47.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:47.815044 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:49:47.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:47.816547 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 24 00:49:47.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:47.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:47.816599 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:49:47.818043 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 24 00:49:47.818092 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:49:47.819493 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:49:47.819546 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:49:47.821766 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 24 00:49:47.823978 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 24 00:49:47.847333 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 24 00:49:47.847432 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 24 00:49:47.849476 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 24 00:49:47.851851 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 24 00:49:47.867628 systemd[1]: Switching root. Jan 24 00:49:47.910696 systemd-journald[304]: Journal stopped Jan 24 00:49:49.172392 systemd-journald[304]: Received SIGTERM from PID 1 (systemd). Jan 24 00:49:49.172423 kernel: SELinux: policy capability network_peer_controls=1 Jan 24 00:49:49.172436 kernel: SELinux: policy capability open_perms=1 Jan 24 00:49:49.172446 kernel: SELinux: policy capability extended_socket_class=1 Jan 24 00:49:49.172456 kernel: SELinux: policy capability always_check_network=0 Jan 24 00:49:49.172468 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 24 00:49:49.172478 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 24 00:49:49.172488 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 24 00:49:49.172498 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 24 00:49:49.172508 kernel: SELinux: policy capability userspace_initial_context=0 Jan 24 00:49:49.172518 systemd[1]: Successfully loaded SELinux policy in 72.905ms. Jan 24 00:49:49.172532 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.825ms. 
Jan 24 00:49:49.172544 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 24 00:49:49.172555 systemd[1]: Detected virtualization kvm. Jan 24 00:49:49.172568 systemd[1]: Detected architecture x86-64. Jan 24 00:49:49.172579 systemd[1]: Detected first boot. Jan 24 00:49:49.172590 systemd[1]: Initializing machine ID from random generator. Jan 24 00:49:49.172600 zram_generator::config[1127]: No configuration found. Jan 24 00:49:49.172612 kernel: Guest personality initialized and is inactive Jan 24 00:49:49.172622 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 24 00:49:49.172634 kernel: Initialized host personality Jan 24 00:49:49.172644 kernel: NET: Registered PF_VSOCK protocol family Jan 24 00:49:49.172654 systemd[1]: Populated /etc with preset unit settings. Jan 24 00:49:49.172665 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 24 00:49:49.172676 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 24 00:49:49.172687 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 24 00:49:49.172701 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 24 00:49:49.172715 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 24 00:49:49.172726 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 24 00:49:49.172737 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 24 00:49:49.172748 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 24 00:49:49.172759 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 24 00:49:49.172770 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 24 00:49:49.172782 systemd[1]: Created slice user.slice - User and Session Slice. Jan 24 00:49:49.172793 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:49:49.172804 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:49:49.172815 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 24 00:49:49.172826 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 24 00:49:49.172837 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 24 00:49:49.172850 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:49:49.172864 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 24 00:49:49.172875 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:49:49.172901 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:49:49.172913 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 24 00:49:49.172924 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 24 00:49:49.172937 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. 
Jan 24 00:49:49.172948 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 24 00:49:49.172960 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:49:49.172971 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:49:49.172982 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 24 00:49:49.173002 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:49:49.173013 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:49:49.173027 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 24 00:49:49.173038 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 24 00:49:49.173049 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 24 00:49:49.173060 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 24 00:49:49.173073 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 24 00:49:49.173084 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:49:49.173095 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 24 00:49:49.173107 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 24 00:49:49.173118 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:49:49.173129 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:49:49.173142 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 24 00:49:49.173153 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 24 00:49:49.173164 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 24 00:49:49.173176 systemd[1]: Mounting media.mount - External Media Directory... Jan 24 00:49:49.173187 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:49:49.173198 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 24 00:49:49.174900 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 24 00:49:49.174922 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 24 00:49:49.174935 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 24 00:49:49.174946 systemd[1]: Reached target machines.target - Containers. Jan 24 00:49:49.174957 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 24 00:49:49.174969 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:49:49.174980 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:49:49.174991 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 24 00:49:49.175004 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:49:49.175015 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:49:49.175026 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 24 00:49:49.175037 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 24 00:49:49.175048 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:49:49.175059 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 24 00:49:49.175072 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 24 00:49:49.175083 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 24 00:49:49.175094 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 24 00:49:49.175105 systemd[1]: Stopped systemd-fsck-usr.service. Jan 24 00:49:49.175116 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 24 00:49:49.175127 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:49:49.175139 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:49:49.175152 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 24 00:49:49.175164 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 24 00:49:49.175175 kernel: ACPI: bus type drm_connector registered Jan 24 00:49:49.175186 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 24 00:49:49.175197 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:49:49.175208 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:49:49.175221 kernel: fuse: init (API version 7.41) Jan 24 00:49:49.175232 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 24 00:49:49.175243 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 24 00:49:49.175254 systemd[1]: Mounted media.mount - External Media Directory. Jan 24 00:49:49.175265 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 24 00:49:49.175276 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 24 00:49:49.175287 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 24 00:49:49.175300 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 24 00:49:49.175331 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:49:49.175344 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 24 00:49:49.175355 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 24 00:49:49.175385 systemd-journald[1208]: Collecting audit messages is enabled. Jan 24 00:49:49.175410 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:49:49.175421 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:49:49.175432 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:49:49.175444 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:49:49.175455 systemd-journald[1208]: Journal started Jan 24 00:49:49.175476 systemd-journald[1208]: Runtime Journal (/run/log/journal/4cb235691de3441b990c03cd7035a449) is 8M, max 78.1M, 70.1M free. 
Jan 24 00:49:48.838000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 24 00:49:49.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:49.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:49.050000 audit: BPF prog-id=14 op=UNLOAD Jan 24 00:49:49.050000 audit: BPF prog-id=13 op=UNLOAD Jan 24 00:49:49.051000 audit: BPF prog-id=15 op=LOAD Jan 24 00:49:49.052000 audit: BPF prog-id=16 op=LOAD Jan 24 00:49:49.052000 audit: BPF prog-id=17 op=LOAD Jan 24 00:49:49.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:49.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:49.161000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 24 00:49:49.161000 audit[1208]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffddf62fd10 a2=4000 a3=0 items=0 ppid=1 pid=1208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:49.161000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 24 00:49:49.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:49.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:49.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:49.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:48.710495 systemd[1]: Queued start job for default target multi-user.target. Jan 24 00:49:49.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:49.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:49:48.723675 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 24 00:49:48.724230 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 24 00:49:49.179906 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:49:49.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:49.180601 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:49:49.180877 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:49:49.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:49.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:49.182303 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 24 00:49:49.182567 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 24 00:49:49.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:49.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:49.183727 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:49:49.184209 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:49:49.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:49.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:49.185478 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:49:49.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:49.187008 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 24 00:49:49.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:49.189059 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 24 00:49:49.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 24 00:49:49.190303 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 24 00:49:49.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:49.205495 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 24 00:49:49.208058 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 24 00:49:49.210006 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 24 00:49:49.213971 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 24 00:49:49.214709 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 24 00:49:49.214737 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 00:49:49.217598 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 24 00:49:49.219512 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:49:49.219691 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 24 00:49:49.226398 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 24 00:49:49.229059 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 24 00:49:49.231195 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:49:49.235328 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 24 00:49:49.237535 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:49:49.242102 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:49:49.246145 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 24 00:49:49.249408 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 00:49:49.253464 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 24 00:49:49.255079 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 24 00:49:49.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:49.261515 systemd-journald[1208]: Time spent on flushing to /var/log/journal/4cb235691de3441b990c03cd7035a449 is 75.966ms for 1124 entries. Jan 24 00:49:49.261515 systemd-journald[1208]: System Journal (/var/log/journal/4cb235691de3441b990c03cd7035a449) is 8M, max 588.1M, 580.1M free. Jan 24 00:49:49.353565 systemd-journald[1208]: Received client request to flush runtime journal. 
Jan 24 00:49:49.353612 kernel: loop1: detected capacity change from 0 to 50784 Jan 24 00:49:49.353639 kernel: loop2: detected capacity change from 0 to 8 Jan 24 00:49:49.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:49.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:49.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:49.256117 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 24 00:49:49.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:49.260319 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 24 00:49:49.272088 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 24 00:49:49.283435 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:49:49.331258 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Jan 24 00:49:49.331270 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Jan 24 00:49:49.338512 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:49:49.343089 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 24 00:49:49.345289 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 24 00:49:49.356008 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:49:49.359447 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 24 00:49:49.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:49.367910 kernel: loop3: detected capacity change from 0 to 111560 Jan 24 00:49:49.385566 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 24 00:49:49.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:49.387000 audit: BPF prog-id=18 op=LOAD Jan 24 00:49:49.388000 audit: BPF prog-id=19 op=LOAD Jan 24 00:49:49.388000 audit: BPF prog-id=20 op=LOAD Jan 24 00:49:49.390220 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... 
Jan 24 00:49:49.393000 audit: BPF prog-id=21 op=LOAD Jan 24 00:49:49.403903 kernel: loop4: detected capacity change from 0 to 219144 Jan 24 00:49:49.401000 audit: BPF prog-id=22 op=LOAD Jan 24 00:49:49.402000 audit: BPF prog-id=23 op=LOAD Jan 24 00:49:49.402000 audit: BPF prog-id=24 op=LOAD Jan 24 00:49:49.396132 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:49:49.400206 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:49:49.405017 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 24 00:49:49.409000 audit: BPF prog-id=25 op=LOAD Jan 24 00:49:49.409000 audit: BPF prog-id=26 op=LOAD Jan 24 00:49:49.409000 audit: BPF prog-id=27 op=LOAD Jan 24 00:49:49.414402 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 24 00:49:49.447958 systemd-tmpfiles[1274]: ACLs are not supported, ignoring. Jan 24 00:49:49.448219 systemd-tmpfiles[1274]: ACLs are not supported, ignoring. Jan 24 00:49:49.454537 kernel: loop5: detected capacity change from 0 to 50784 Jan 24 00:49:49.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:49.458676 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:49:49.474913 kernel: loop6: detected capacity change from 0 to 8 Jan 24 00:49:49.481901 kernel: loop7: detected capacity change from 0 to 111560 Jan 24 00:49:49.493083 systemd-nsresourced[1275]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 24 00:49:49.496216 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 24 00:49:49.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:49.511153 kernel: loop1: detected capacity change from 0 to 219144 Jan 24 00:49:49.520654 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 24 00:49:49.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:49.527604 (sd-merge)[1280]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-akamai.raw'. Jan 24 00:49:49.536469 (sd-merge)[1280]: Merged extensions into '/usr'. Jan 24 00:49:49.543495 systemd[1]: Reload requested from client PID 1249 ('systemd-sysext') (unit systemd-sysext.service)... Jan 24 00:49:49.543584 systemd[1]: Reloading... Jan 24 00:49:49.665053 systemd-oomd[1272]: No swap; memory pressure usage will be degraded Jan 24 00:49:49.671912 zram_generator::config[1328]: No configuration found. Jan 24 00:49:49.685044 systemd-resolved[1273]: Positive Trust Anchors: Jan 24 00:49:49.685060 systemd-resolved[1273]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:49:49.685065 systemd-resolved[1273]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 24 00:49:49.685094 systemd-resolved[1273]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:49:49.689986 systemd-resolved[1273]: Defaulting to hostname 'linux'. Jan 24 00:49:49.856310 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 24 00:49:49.856930 systemd[1]: Reloading finished in 312 ms. Jan 24 00:49:49.889748 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 24 00:49:49.891917 kernel: kauditd_printk_skb: 112 callbacks suppressed Jan 24 00:49:49.891955 kernel: audit: type=1130 audit(1769215789.890:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:49.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:49.890981 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:49:49.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:49.899305 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 24 00:49:49.904911 kernel: audit: type=1130 audit(1769215789.898:150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:49.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:49.905598 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 24 00:49:49.910909 kernel: audit: type=1130 audit(1769215789.904:151): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:49.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:49.916207 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:49:49.938907 kernel: audit: type=1130 audit(1769215789.910:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:49:49.949159 systemd[1]: Starting ensure-sysext.service... Jan 24 00:49:49.952000 audit: BPF prog-id=8 op=UNLOAD Jan 24 00:49:49.952000 audit: BPF prog-id=7 op=UNLOAD Jan 24 00:49:49.953014 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:49:49.956092 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:49:49.956980 kernel: audit: type=1334 audit(1769215789.952:153): prog-id=8 op=UNLOAD Jan 24 00:49:49.957019 kernel: audit: type=1334 audit(1769215789.952:154): prog-id=7 op=UNLOAD Jan 24 00:49:49.958095 kernel: audit: type=1334 audit(1769215789.953:155): prog-id=28 op=LOAD Jan 24 00:49:49.958134 kernel: audit: type=1334 audit(1769215789.953:156): prog-id=29 op=LOAD Jan 24 00:49:49.953000 audit: BPF prog-id=28 op=LOAD Jan 24 00:49:49.953000 audit: BPF prog-id=29 op=LOAD Jan 24 00:49:49.967000 audit: BPF prog-id=30 op=LOAD Jan 24 00:49:49.970904 kernel: audit: type=1334 audit(1769215789.967:157): prog-id=30 op=LOAD Jan 24 00:49:49.967000 audit: BPF prog-id=18 op=UNLOAD Jan 24 00:49:49.967000 audit: BPF prog-id=31 op=LOAD Jan 24 00:49:49.967000 audit: BPF prog-id=32 op=LOAD Jan 24 00:49:49.968000 audit: BPF prog-id=19 op=UNLOAD Jan 24 00:49:49.968000 audit: BPF prog-id=20 op=UNLOAD Jan 24 00:49:49.970000 audit: BPF prog-id=33 op=LOAD Jan 24 00:49:49.970000 audit: BPF prog-id=15 op=UNLOAD Jan 24 00:49:49.970000 audit: BPF prog-id=34 op=LOAD Jan 24 00:49:49.970000 audit: BPF prog-id=35 op=LOAD Jan 24 00:49:49.970000 audit: BPF prog-id=16 op=UNLOAD Jan 24 00:49:49.970000 audit: BPF prog-id=17 op=UNLOAD Jan 24 00:49:49.971000 audit: BPF prog-id=36 op=LOAD Jan 24 00:49:49.971000 audit: BPF prog-id=25 op=UNLOAD Jan 24 00:49:49.971000 audit: BPF prog-id=37 op=LOAD Jan 24 00:49:49.971000 audit: BPF prog-id=38 op=LOAD Jan 24 00:49:49.971000 audit: BPF prog-id=26 op=UNLOAD Jan 24 00:49:49.971000 audit: BPF prog-id=27 op=UNLOAD Jan 24 00:49:49.973914 kernel: audit: type=1334 audit(1769215789.967:158): prog-id=18 op=UNLOAD Jan 24 00:49:49.973000 audit: BPF prog-id=39 op=LOAD Jan 24 00:49:49.973000 audit: BPF prog-id=21 op=UNLOAD Jan 24 00:49:49.976000 audit: BPF prog-id=40 op=LOAD Jan 24 00:49:49.976000 audit: BPF prog-id=22 op=UNLOAD Jan 24 00:49:49.976000 audit: BPF prog-id=41 op=LOAD Jan 24 00:49:49.976000 audit: BPF prog-id=42 op=LOAD Jan 24 00:49:49.976000 audit: BPF prog-id=23 op=UNLOAD Jan 24 00:49:49.976000 audit: BPF prog-id=24 op=UNLOAD Jan 24 00:49:49.986050 systemd[1]: Reload requested from client PID 1368 ('systemctl') (unit ensure-sysext.service)... Jan 24 00:49:49.986065 systemd[1]: Reloading... Jan 24 00:49:49.997688 systemd-udevd[1370]: Using default interface naming scheme 'v257'. Jan 24 00:49:50.001783 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 24 00:49:50.001825 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 24 00:49:50.003817 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 24 00:49:50.005507 systemd-tmpfiles[1369]: ACLs are not supported, ignoring. Jan 24 00:49:50.006225 systemd-tmpfiles[1369]: ACLs are not supported, ignoring. Jan 24 00:49:50.022534 systemd-tmpfiles[1369]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 24 00:49:50.023356 systemd-tmpfiles[1369]: Skipping /boot Jan 24 00:49:50.049150 systemd-tmpfiles[1369]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 00:49:50.049260 systemd-tmpfiles[1369]: Skipping /boot Jan 24 00:49:50.094909 zram_generator::config[1423]: No configuration found. Jan 24 00:49:50.229253 kernel: mousedev: PS/2 mouse device common for all mice Jan 24 00:49:50.254930 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 24 00:49:50.258908 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 24 00:49:50.266906 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 24 00:49:50.308902 kernel: ACPI: button: Power Button [PWRF] Jan 24 00:49:50.342961 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 24 00:49:50.343325 systemd[1]: Reloading finished in 356 ms. Jan 24 00:49:50.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:50.349557 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:49:50.355170 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:49:50.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:50.361000 audit: BPF prog-id=43 op=LOAD Jan 24 00:49:50.361000 audit: BPF prog-id=36 op=UNLOAD Jan 24 00:49:50.361000 audit: BPF prog-id=44 op=LOAD Jan 24 00:49:50.361000 audit: BPF prog-id=45 op=LOAD Jan 24 00:49:50.361000 audit: BPF prog-id=37 op=UNLOAD Jan 24 00:49:50.361000 audit: BPF prog-id=38 op=UNLOAD Jan 24 00:49:50.362000 audit: BPF prog-id=46 op=LOAD Jan 24 00:49:50.362000 audit: BPF prog-id=47 op=LOAD Jan 24 00:49:50.362000 audit: BPF prog-id=28 op=UNLOAD Jan 24 00:49:50.362000 audit: BPF prog-id=29 op=UNLOAD Jan 24 00:49:50.363000 audit: BPF prog-id=48 op=LOAD Jan 24 00:49:50.364000 audit: BPF prog-id=39 op=UNLOAD Jan 24 00:49:50.365000 audit: BPF prog-id=49 op=LOAD Jan 24 00:49:50.365000 audit: BPF prog-id=33 op=UNLOAD Jan 24 00:49:50.365000 audit: BPF prog-id=50 op=LOAD Jan 24 00:49:50.365000 audit: BPF prog-id=51 op=LOAD Jan 24 00:49:50.365000 audit: BPF prog-id=34 op=UNLOAD Jan 24 00:49:50.365000 audit: BPF prog-id=35 op=UNLOAD Jan 24 00:49:50.369000 audit: BPF prog-id=52 op=LOAD Jan 24 00:49:50.372000 audit: BPF prog-id=30 op=UNLOAD Jan 24 00:49:50.372000 audit: BPF prog-id=53 op=LOAD Jan 24 00:49:50.372000 audit: BPF prog-id=54 op=LOAD Jan 24 00:49:50.372000 audit: BPF prog-id=31 op=UNLOAD Jan 24 00:49:50.372000 audit: BPF prog-id=32 op=UNLOAD Jan 24 00:49:50.374000 audit: BPF prog-id=55 op=LOAD Jan 24 00:49:50.374000 audit: BPF prog-id=40 op=UNLOAD Jan 24 00:49:50.374000 audit: BPF prog-id=56 op=LOAD Jan 24 00:49:50.374000 audit: BPF prog-id=57 op=LOAD Jan 24 00:49:50.374000 audit: BPF prog-id=41 op=UNLOAD Jan 24 00:49:50.374000 audit: BPF prog-id=42 op=UNLOAD Jan 24 00:49:50.396481 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:49:50.399091 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Jan 24 00:49:50.403084 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 24 00:49:50.403972 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:49:50.412334 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:49:50.420020 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:49:50.424691 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:49:50.428401 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:49:50.430426 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:49:50.431076 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 24 00:49:50.433935 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 24 00:49:50.435970 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 24 00:49:50.441053 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 24 00:49:50.445043 kernel: EDAC MC: Ver: 3.0.0 Jan 24 00:49:50.446000 audit: BPF prog-id=58 op=LOAD Jan 24 00:49:50.448293 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:49:50.457325 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 24 00:49:50.458616 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:49:50.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:50.460547 systemd[1]: Finished ensure-sysext.service. Jan 24 00:49:50.467000 audit: BPF prog-id=59 op=LOAD Jan 24 00:49:50.484058 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 24 00:49:50.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:50.487165 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 24 00:49:50.489245 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:49:50.489491 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:49:50.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:50.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:49:50.503101 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 24 00:49:50.520000 audit[1500]: SYSTEM_BOOT pid=1500 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 24 00:49:50.543182 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 24 00:49:50.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:50.583066 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 24 00:49:50.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:50.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:50.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:50.602687 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:49:50.603021 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:49:50.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:50.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:50.604434 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:49:50.604668 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:49:50.605837 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:49:50.606126 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:49:50.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:50.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:50.608577 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:49:50.608655 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 24 00:49:50.630199 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 24 00:49:50.642000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 24 00:49:50.642000 audit[1533]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffbe875c90 a2=420 a3=0 items=0 ppid=1485 pid=1533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:50.642000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 24 00:49:50.634418 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 24 00:49:50.645790 augenrules[1533]: No rules Jan 24 00:49:50.644860 systemd[1]: audit-rules.service: Deactivated successfully. Jan 24 00:49:50.651932 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 24 00:49:50.698096 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:49:50.711185 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 24 00:49:50.781405 systemd-networkd[1499]: lo: Link UP Jan 24 00:49:50.781414 systemd-networkd[1499]: lo: Gained carrier Jan 24 00:49:50.785352 systemd-networkd[1499]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 24 00:49:50.785361 systemd-networkd[1499]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:49:50.785424 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:49:50.786321 systemd[1]: Reached target network.target - Network. Jan 24 00:49:50.786347 systemd-networkd[1499]: eth0: Link UP Jan 24 00:49:50.787400 systemd-networkd[1499]: eth0: Gained carrier Jan 24 00:49:50.787420 systemd-networkd[1499]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 24 00:49:50.791412 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 24 00:49:50.794752 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 24 00:49:50.819961 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 24 00:49:50.821375 systemd[1]: Reached target time-set.target - System Time Set. Jan 24 00:49:50.829176 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 24 00:49:50.967998 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:49:51.091799 ldconfig[1495]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 24 00:49:51.095262 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 24 00:49:51.097671 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 24 00:49:51.123786 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 24 00:49:51.124778 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:49:51.125648 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Jan 24 00:49:51.126468 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 24 00:49:51.127379 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 24 00:49:51.128295 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 24 00:49:51.129108 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 24 00:49:51.129899 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 24 00:49:51.130809 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 24 00:49:51.131565 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 24 00:49:51.132325 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 24 00:49:51.132361 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:49:51.133043 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:49:51.134354 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 24 00:49:51.137102 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 24 00:49:51.139800 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 24 00:49:51.140725 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 24 00:49:51.141498 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 24 00:49:51.144506 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 24 00:49:51.145536 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 24 00:49:51.146879 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 24 00:49:51.148292 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:49:51.148991 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:49:51.149708 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:49:51.149745 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:49:51.150715 systemd[1]: Starting containerd.service - containerd container runtime... Jan 24 00:49:51.154026 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 24 00:49:51.156131 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 24 00:49:51.160014 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 24 00:49:51.165186 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 24 00:49:51.169236 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 24 00:49:51.171333 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 24 00:49:51.175072 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 24 00:49:51.181707 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 24 00:49:51.186853 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 24 00:49:51.196109 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jan 24 00:49:51.201340 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 24 00:49:51.213282 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 24 00:49:51.215296 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 24 00:49:51.215743 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 24 00:49:51.217518 systemd[1]: Starting update-engine.service - Update Engine... Jan 24 00:49:51.218682 jq[1559]: false Jan 24 00:49:51.221555 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Refreshing passwd entry cache Jan 24 00:49:51.222091 oslogin_cache_refresh[1561]: Refreshing passwd entry cache Jan 24 00:49:51.225641 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Failure getting users, quitting Jan 24 00:49:51.226304 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 24 00:49:51.228931 oslogin_cache_refresh[1561]: Failure getting users, quitting Jan 24 00:49:51.230483 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 24 00:49:51.230483 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Refreshing group entry cache Jan 24 00:49:51.230483 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Failure getting groups, quitting Jan 24 00:49:51.230483 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 24 00:49:51.230209 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 24 00:49:51.228951 oslogin_cache_refresh[1561]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 24 00:49:51.228991 oslogin_cache_refresh[1561]: Refreshing group entry cache Jan 24 00:49:51.229444 oslogin_cache_refresh[1561]: Failure getting groups, quitting Jan 24 00:49:51.229454 oslogin_cache_refresh[1561]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 24 00:49:51.232288 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 24 00:49:51.232563 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 24 00:49:51.234428 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 24 00:49:51.235065 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 24 00:49:51.237515 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 24 00:49:51.239081 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 24 00:49:51.256920 update_engine[1570]: I20260124 00:49:51.256821 1570 main.cc:92] Flatcar Update Engine starting Jan 24 00:49:51.263090 extend-filesystems[1560]: Found /dev/sda6 Jan 24 00:49:51.282959 extend-filesystems[1560]: Found /dev/sda9 Jan 24 00:49:51.287557 jq[1574]: true Jan 24 00:49:51.293990 extend-filesystems[1560]: Checking size of /dev/sda9 Jan 24 00:49:51.303500 tar[1576]: linux-amd64/LICENSE Jan 24 00:49:51.306021 tar[1576]: linux-amd64/helm Jan 24 00:49:51.316763 dbus-daemon[1557]: [system] SELinux support is enabled Jan 24 00:49:51.316997 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 24 00:49:51.320410 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 24 00:49:51.320435 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 24 00:49:51.322120 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 24 00:49:51.322146 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 24 00:49:51.328299 systemd[1]: motdgen.service: Deactivated successfully. Jan 24 00:49:51.328604 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 24 00:49:51.339895 jq[1600]: true Jan 24 00:49:51.340308 extend-filesystems[1560]: Resized partition /dev/sda9 Jan 24 00:49:51.345376 update_engine[1570]: I20260124 00:49:51.343053 1570 update_check_scheduler.cc:74] Next update check in 2m4s Jan 24 00:49:51.343264 systemd[1]: Started update-engine.service - Update Engine. Jan 24 00:49:51.356923 extend-filesystems[1614]: resize2fs 1.47.3 (8-Jul-2025) Jan 24 00:49:51.359965 coreos-metadata[1556]: Jan 24 00:49:51.356 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Jan 24 00:49:51.366917 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 24 00:49:51.378912 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19377147 blocks Jan 24 00:49:51.495044 systemd-logind[1569]: Watching system buttons on /dev/input/event2 (Power Button) Jan 24 00:49:51.495565 systemd-logind[1569]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 24 00:49:51.497005 systemd-logind[1569]: New seat seat0. Jan 24 00:49:51.499093 systemd[1]: Started systemd-logind.service - User Login Management. Jan 24 00:49:51.522666 bash[1629]: Updated "/home/core/.ssh/authorized_keys" Jan 24 00:49:51.526520 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 24 00:49:51.531195 systemd[1]: Starting sshkeys.service... Jan 24 00:49:51.546946 systemd-networkd[1499]: eth0: DHCPv4 address 172.239.50.209/24, gateway 172.239.50.1 acquired from 23.205.167.250 Jan 24 00:49:51.548781 dbus-daemon[1557]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1499 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 24 00:49:51.547912 systemd-timesyncd[1502]: Network configuration changed, trying to establish connection. Jan 24 00:49:51.555738 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 24 00:49:51.599477 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 24 00:49:51.609551 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jan 24 00:49:51.688303 containerd[1593]: time="2026-01-24T00:49:51Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 24 00:49:51.690833 containerd[1593]: time="2026-01-24T00:49:51.690493531Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 24 00:49:51.724942 containerd[1593]: time="2026-01-24T00:49:51.724197228Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.96µs" Jan 24 00:49:51.724942 containerd[1593]: time="2026-01-24T00:49:51.724226068Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 24 00:49:51.724942 containerd[1593]: time="2026-01-24T00:49:51.724258158Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 24 00:49:51.724942 containerd[1593]: time="2026-01-24T00:49:51.724269178Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 24 00:49:51.724942 containerd[1593]: time="2026-01-24T00:49:51.724419698Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 24 00:49:51.724942 containerd[1593]: time="2026-01-24T00:49:51.724433618Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 24 00:49:51.724942 containerd[1593]: time="2026-01-24T00:49:51.724493028Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 24 00:49:51.724942 containerd[1593]: time="2026-01-24T00:49:51.724504738Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 24 00:49:51.724942 containerd[1593]: time="2026-01-24T00:49:51.724685018Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 24 00:49:51.724942 containerd[1593]: time="2026-01-24T00:49:51.724697568Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 24 00:49:51.724942 containerd[1593]: time="2026-01-24T00:49:51.724707288Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 24 00:49:51.724942 containerd[1593]: time="2026-01-24T00:49:51.724715068Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 24 00:49:51.726309 containerd[1593]: time="2026-01-24T00:49:51.724875288Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 24 00:49:51.726309 containerd[1593]: time="2026-01-24T00:49:51.726255509Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 24 00:49:51.726399 containerd[1593]: time="2026-01-24T00:49:51.726373809Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 
24 00:49:51.726691 containerd[1593]: time="2026-01-24T00:49:51.726669129Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 24 00:49:51.726722 containerd[1593]: time="2026-01-24T00:49:51.726712359Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 24 00:49:51.726743 containerd[1593]: time="2026-01-24T00:49:51.726722389Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 24 00:49:51.728348 containerd[1593]: time="2026-01-24T00:49:51.728178460Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 24 00:49:51.728414 containerd[1593]: time="2026-01-24T00:49:51.728394630Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 24 00:49:51.728924 containerd[1593]: time="2026-01-24T00:49:51.728466130Z" level=info msg="metadata content store policy set" policy=shared Jan 24 00:49:51.754908 containerd[1593]: time="2026-01-24T00:49:51.753736993Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 24 00:49:51.754908 containerd[1593]: time="2026-01-24T00:49:51.753775563Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 24 00:49:51.754908 containerd[1593]: time="2026-01-24T00:49:51.753844223Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 24 00:49:51.754908 containerd[1593]: time="2026-01-24T00:49:51.753855683Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 24 00:49:51.754908 containerd[1593]: time="2026-01-24T00:49:51.753866813Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 24 00:49:51.754908 containerd[1593]: time="2026-01-24T00:49:51.753876643Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 24 00:49:51.754908 containerd[1593]: time="2026-01-24T00:49:51.753903393Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 24 00:49:51.754908 containerd[1593]: time="2026-01-24T00:49:51.753913193Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 24 00:49:51.754908 containerd[1593]: time="2026-01-24T00:49:51.753923393Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 24 00:49:51.754908 containerd[1593]: time="2026-01-24T00:49:51.753933933Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 24 00:49:51.754908 containerd[1593]: time="2026-01-24T00:49:51.753942903Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 24 00:49:51.754908 containerd[1593]: time="2026-01-24T00:49:51.753951403Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 24 00:49:51.754908 containerd[1593]: time="2026-01-24T00:49:51.753959293Z" level=info msg="loading plugin" 
id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 24 00:49:51.754908 containerd[1593]: time="2026-01-24T00:49:51.753969103Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 24 00:49:51.755160 containerd[1593]: time="2026-01-24T00:49:51.754075713Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 24 00:49:51.755160 containerd[1593]: time="2026-01-24T00:49:51.754093803Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 24 00:49:51.755160 containerd[1593]: time="2026-01-24T00:49:51.754105373Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 24 00:49:51.755160 containerd[1593]: time="2026-01-24T00:49:51.754114653Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 24 00:49:51.755160 containerd[1593]: time="2026-01-24T00:49:51.754123443Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 24 00:49:51.755160 containerd[1593]: time="2026-01-24T00:49:51.754132193Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 24 00:49:51.755160 containerd[1593]: time="2026-01-24T00:49:51.754141833Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 24 00:49:51.755160 containerd[1593]: time="2026-01-24T00:49:51.754150213Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 24 00:49:51.755160 containerd[1593]: time="2026-01-24T00:49:51.754159573Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 24 00:49:51.755160 containerd[1593]: time="2026-01-24T00:49:51.754168553Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 24 00:49:51.755160 containerd[1593]: time="2026-01-24T00:49:51.754187603Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 24 00:49:51.755160 containerd[1593]: time="2026-01-24T00:49:51.754206483Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 24 00:49:51.755160 containerd[1593]: time="2026-01-24T00:49:51.754240933Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 24 00:49:51.755160 containerd[1593]: time="2026-01-24T00:49:51.754252183Z" level=info msg="Start snapshots syncer" Jan 24 00:49:51.755160 containerd[1593]: time="2026-01-24T00:49:51.754273803Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 24 00:49:51.755439 containerd[1593]: time="2026-01-24T00:49:51.754525513Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 24 00:49:51.755439 containerd[1593]: time="2026-01-24T00:49:51.754561743Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 24 00:49:51.755553 containerd[1593]: time="2026-01-24T00:49:51.754600063Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 24 00:49:51.755553 containerd[1593]: time="2026-01-24T00:49:51.754701523Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 24 00:49:51.755553 containerd[1593]: time="2026-01-24T00:49:51.754725243Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 24 00:49:51.755553 containerd[1593]: time="2026-01-24T00:49:51.754734343Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 24 00:49:51.755553 containerd[1593]: time="2026-01-24T00:49:51.754744293Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 24 00:49:51.755553 containerd[1593]: time="2026-01-24T00:49:51.754754843Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 24 00:49:51.755553 containerd[1593]: time="2026-01-24T00:49:51.754763023Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 24 00:49:51.755553 containerd[1593]: time="2026-01-24T00:49:51.754771513Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 24 00:49:51.755553 containerd[1593]: time="2026-01-24T00:49:51.754780083Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 24 
00:49:51.755553 containerd[1593]: time="2026-01-24T00:49:51.754789153Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 24 00:49:51.755553 containerd[1593]: time="2026-01-24T00:49:51.754814523Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 24 00:49:51.755553 containerd[1593]: time="2026-01-24T00:49:51.754828643Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 24 00:49:51.755553 containerd[1593]: time="2026-01-24T00:49:51.754836183Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 24 00:49:51.755807 containerd[1593]: time="2026-01-24T00:49:51.754844193Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 24 00:49:51.755807 containerd[1593]: time="2026-01-24T00:49:51.754851723Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 24 00:49:51.755807 containerd[1593]: time="2026-01-24T00:49:51.754861173Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 24 00:49:51.755807 containerd[1593]: time="2026-01-24T00:49:51.754869923Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 24 00:49:51.757197 containerd[1593]: time="2026-01-24T00:49:51.754880203Z" level=info msg="runtime interface created" Jan 24 00:49:51.757197 containerd[1593]: time="2026-01-24T00:49:51.757036014Z" level=info msg="created NRI interface" Jan 24 00:49:51.757197 containerd[1593]: time="2026-01-24T00:49:51.757050794Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 24 00:49:51.757197 containerd[1593]: time="2026-01-24T00:49:51.757119434Z" level=info msg="Connect containerd service" Jan 24 00:49:51.757197 containerd[1593]: time="2026-01-24T00:49:51.757139724Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 24 00:49:51.759474 containerd[1593]: time="2026-01-24T00:49:51.759454166Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:49:51.814910 kernel: EXT4-fs (sda9): resized filesystem to 19377147 Jan 24 00:49:51.816414 coreos-metadata[1640]: Jan 24 00:49:51.816 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Jan 24 00:49:51.828661 extend-filesystems[1614]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 24 00:49:51.828661 extend-filesystems[1614]: old_desc_blocks = 1, new_desc_blocks = 10 Jan 24 00:49:51.828661 extend-filesystems[1614]: The filesystem on /dev/sda9 is now 19377147 (4k) blocks long. Jan 24 00:49:51.836063 extend-filesystems[1560]: Resized filesystem in /dev/sda9 Jan 24 00:49:51.832133 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 24 00:49:51.833303 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 24 00:49:51.840682 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
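[annotation] The containerd entry above reports "failed to load cni during init ... no network config found in /etc/cni/net.d", which is expected on a node where no CNI add-on has been installed yet. The sketch below shows roughly what a minimal bridge-network config dropped into that directory could look like; the file name "10-bridge.conflist", the network name "mynet", the bridge "cni0" and the 10.88.0.0/16 subnet are all assumptions for illustration, since a real cluster's CNI add-on installs its own config.

```python
# Minimal sketch: write a bridge CNI config into /etc/cni/net.d so the
# containerd CRI plugin's conf syncer can pick it up.  All names and
# ranges below are illustrative assumptions, not values from this host.
import json, pathlib

conf = {
    "cniVersion": "1.0.0",
    "name": "mynet",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "ranges": [[{"subnet": "10.88.0.0/16"}]],
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

path = pathlib.Path("/etc/cni/net.d/10-bridge.conflist")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(json.dumps(conf, indent=2))
print(f"wrote {path}")
```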
Jan 24 00:49:51.847691 dbus-daemon[1557]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 24 00:49:51.849915 dbus-daemon[1557]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1639 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 24 00:49:51.857678 systemd[1]: Starting polkit.service - Authorization Manager... Jan 24 00:49:51.874806 locksmithd[1613]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 24 00:49:51.918911 coreos-metadata[1640]: Jan 24 00:49:51.917 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Jan 24 00:49:51.945908 containerd[1593]: time="2026-01-24T00:49:51.945189858Z" level=info msg="Start subscribing containerd event" Jan 24 00:49:51.945908 containerd[1593]: time="2026-01-24T00:49:51.945238458Z" level=info msg="Start recovering state" Jan 24 00:49:51.945908 containerd[1593]: time="2026-01-24T00:49:51.945326458Z" level=info msg="Start event monitor" Jan 24 00:49:51.945908 containerd[1593]: time="2026-01-24T00:49:51.945338868Z" level=info msg="Start cni network conf syncer for default" Jan 24 00:49:51.945908 containerd[1593]: time="2026-01-24T00:49:51.945349479Z" level=info msg="Start streaming server" Jan 24 00:49:51.945908 containerd[1593]: time="2026-01-24T00:49:51.945358819Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 24 00:49:51.945908 containerd[1593]: time="2026-01-24T00:49:51.945365789Z" level=info msg="runtime interface starting up..." Jan 24 00:49:51.945908 containerd[1593]: time="2026-01-24T00:49:51.945374039Z" level=info msg="starting plugins..." Jan 24 00:49:51.945908 containerd[1593]: time="2026-01-24T00:49:51.945388429Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 24 00:49:51.945908 containerd[1593]: time="2026-01-24T00:49:51.945602439Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 24 00:49:51.945908 containerd[1593]: time="2026-01-24T00:49:51.945659489Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 24 00:49:51.950229 systemd[1]: Started containerd.service - containerd container runtime. Jan 24 00:49:51.954305 containerd[1593]: time="2026-01-24T00:49:51.954280113Z" level=info msg="containerd successfully booted in 0.267776s" Jan 24 00:49:52.023420 polkitd[1661]: Started polkitd version 126 Jan 24 00:49:52.032280 polkitd[1661]: Loading rules from directory /etc/polkit-1/rules.d Jan 24 00:49:52.035172 polkitd[1661]: Loading rules from directory /run/polkit-1/rules.d Jan 24 00:49:52.035214 polkitd[1661]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 24 00:49:52.035403 polkitd[1661]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 24 00:49:52.035424 polkitd[1661]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 24 00:49:52.035458 polkitd[1661]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 24 00:49:52.038074 polkitd[1661]: Finished loading, compiling and executing 2 rules Jan 24 00:49:52.038694 systemd[1]: Started polkit.service - Authorization Manager. 
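[annotation] The entries above show containerd serving on /run/containerd/containerd.sock (and its ttrpc twin) before systemd marks the unit started. A minimal smoke test, assuming only the socket path printed in the log: check that the UNIX socket accepts connections, then optionally ask the stock ctr client for the daemon version.

```python
# Minimal smoke test against the socket path logged above.  It only
# verifies that the socket accepts connections; the optional ctr call
# queries the daemon version through the standard client.
import socket, subprocess, sys

SOCK = "/run/containerd/containerd.sock"

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    s.connect(SOCK)
    print(f"{SOCK}: accepting connections")
except OSError as err:
    sys.exit(f"{SOCK}: {err}")
finally:
    s.close()

# Optional version query via the bundled ctr client.
subprocess.run(["ctr", "--address", SOCK, "version"], check=False)
```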
Jan 24 00:49:52.042346 dbus-daemon[1557]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 24 00:49:52.043184 polkitd[1661]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 24 00:49:52.062346 systemd-hostnamed[1639]: Hostname set to <172-239-50-209> (transient) Jan 24 00:49:52.062773 systemd-resolved[1273]: System hostname changed to '172-239-50-209'. Jan 24 00:49:52.090821 coreos-metadata[1640]: Jan 24 00:49:52.090 INFO Fetch successful Jan 24 00:49:52.108796 tar[1576]: linux-amd64/README.md Jan 24 00:49:52.122625 update-ssh-keys[1675]: Updated "/home/core/.ssh/authorized_keys" Jan 24 00:49:52.127046 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 24 00:49:52.134363 systemd[1]: Finished sshkeys.service. Jan 24 00:49:52.137164 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 24 00:49:52.158767 sshd_keygen[1590]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 24 00:49:52.180033 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 24 00:49:52.182765 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 24 00:49:52.198968 systemd[1]: issuegen.service: Deactivated successfully. Jan 24 00:49:52.199279 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 24 00:49:52.202071 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 24 00:49:52.218289 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 24 00:49:52.221061 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 24 00:49:52.225186 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 24 00:49:52.226071 systemd[1]: Reached target getty.target - Login Prompts. Jan 24 00:49:52.252052 systemd-networkd[1499]: eth0: Gained IPv6LL Jan 24 00:49:52.258467 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 24 00:49:52.259949 systemd[1]: Reached target network-online.target - Network is Online. Jan 24 00:49:52.262747 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:49:52.266173 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 24 00:49:52.288308 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 24 00:49:52.366894 coreos-metadata[1556]: Jan 24 00:49:52.366 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Jan 24 00:49:52.527174 coreos-metadata[1556]: Jan 24 00:49:52.526 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Jan 24 00:49:52.706155 coreos-metadata[1556]: Jan 24 00:49:52.706 INFO Fetch successful Jan 24 00:49:52.706155 coreos-metadata[1556]: Jan 24 00:49:52.706 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Jan 24 00:49:53.424272 systemd-timesyncd[1502]: Contacted time server 162.159.200.123:123 (0.flatcar.pool.ntp.org). Jan 24 00:49:53.424327 systemd-timesyncd[1502]: Initial clock synchronization to Sat 2026-01-24 00:49:53.424159 UTC. Jan 24 00:49:53.425094 systemd-resolved[1273]: Clock change detected. Flushing caches. Jan 24 00:49:53.645999 coreos-metadata[1556]: Jan 24 00:49:53.645 INFO Fetch successful Jan 24 00:49:53.771909 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 24 00:49:53.774624 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 24 00:49:53.879937 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
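[annotation] The coreos-metadata entries above follow the usual link-local metadata pattern: PUT /v1/token to obtain a session token, then fetch /v1/ssh-keys, /v1/instance and /v1/network with it. The sketch below mirrors that sequence; the header names are how I understand the Linode metadata service to work and should be treated as assumptions, not a verified client.

```python
# Rough sketch of the token-then-fetch sequence the metadata agent logs
# above.  Header names ("Metadata-Token-Expiry-Seconds", "Metadata-Token")
# are assumptions about the Linode metadata API.
import json
import urllib.request

BASE = "http://169.254.169.254/v1"

tok_req = urllib.request.Request(
    f"{BASE}/token",
    method="PUT",
    headers={"Metadata-Token-Expiry-Seconds": "3600"},
)
token = urllib.request.urlopen(tok_req, timeout=5).read().decode().strip()

inst_req = urllib.request.Request(
    f"{BASE}/instance",
    headers={"Metadata-Token": token, "Accept": "application/json"},
)
instance = json.load(urllib.request.urlopen(inst_req, timeout=5))
print(sorted(instance))  # just list the top-level keys returned
```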
Jan 24 00:49:53.881982 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 24 00:49:53.884056 systemd[1]: Startup finished in 2.969s (kernel) + 6.267s (initrd) + 5.258s (userspace) = 14.495s. Jan 24 00:49:53.891240 (kubelet)[1736]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:49:54.361908 kubelet[1736]: E0124 00:49:54.361828 1736 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:49:54.366232 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:49:54.366432 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:49:54.366921 systemd[1]: kubelet.service: Consumed 848ms CPU time, 257.6M memory peak. Jan 24 00:49:55.381777 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 24 00:49:55.383164 systemd[1]: Started sshd@0-172.239.50.209:22-68.220.241.50:54034.service - OpenSSH per-connection server daemon (68.220.241.50:54034). Jan 24 00:49:55.561828 sshd[1748]: Accepted publickey for core from 68.220.241.50 port 54034 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:49:55.563702 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:49:55.570417 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 24 00:49:55.572081 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 24 00:49:55.578134 systemd-logind[1569]: New session 1 of user core. Jan 24 00:49:55.591850 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 24 00:49:55.595212 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 24 00:49:55.614549 (systemd)[1754]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:49:55.617459 systemd-logind[1569]: New session 2 of user core. Jan 24 00:49:55.782192 systemd[1754]: Queued start job for default target default.target. Jan 24 00:49:55.792463 systemd[1754]: Created slice app.slice - User Application Slice. Jan 24 00:49:55.792498 systemd[1754]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 24 00:49:55.792515 systemd[1754]: Reached target paths.target - Paths. Jan 24 00:49:55.793050 systemd[1754]: Reached target timers.target - Timers. Jan 24 00:49:55.794674 systemd[1754]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 24 00:49:55.797952 systemd[1754]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 24 00:49:55.819935 systemd[1754]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 24 00:49:55.821487 systemd[1754]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 24 00:49:55.821624 systemd[1754]: Reached target sockets.target - Sockets. Jan 24 00:49:55.821674 systemd[1754]: Reached target basic.target - Basic System. Jan 24 00:49:55.821726 systemd[1754]: Reached target default.target - Main User Target. Jan 24 00:49:55.821762 systemd[1754]: Startup finished in 199ms. Jan 24 00:49:55.822106 systemd[1]: Started user@500.service - User Manager for UID 500. 
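[annotation] The kubelet exit above is the normal pre-bootstrap failure: /var/lib/kubelet/config.yaml does not exist until `kubeadm init` or `kubeadm join` writes it, so the unit keeps failing and restarting until the node is joined. For orientation only, the fragment below is a hand-rolled minimum of that file; the field values are assumptions, not what kubeadm would generate for this node.

```python
# Illustrative minimum of /var/lib/kubelet/config.yaml.  On a kubeadm
# node this file is generated by kubeadm; the values here are assumed
# for the sketch (cgroupDriver matches SystemdCgroup=true seen in the
# containerd config earlier in this log).
import pathlib

KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
failSwapOn: false
"""

path = pathlib.Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(KUBELET_CONFIG)
print(f"wrote {path}")
```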
Jan 24 00:49:55.827201 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 24 00:49:55.906267 systemd[1]: Started sshd@1-172.239.50.209:22-68.220.241.50:54038.service - OpenSSH per-connection server daemon (68.220.241.50:54038). Jan 24 00:49:56.057581 sshd[1768]: Accepted publickey for core from 68.220.241.50 port 54038 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:49:56.059771 sshd-session[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:49:56.065854 systemd-logind[1569]: New session 3 of user core. Jan 24 00:49:56.072989 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 24 00:49:56.126904 sshd[1772]: Connection closed by 68.220.241.50 port 54038 Jan 24 00:49:56.128971 sshd-session[1768]: pam_unix(sshd:session): session closed for user core Jan 24 00:49:56.133848 systemd-logind[1569]: Session 3 logged out. Waiting for processes to exit. Jan 24 00:49:56.134144 systemd[1]: sshd@1-172.239.50.209:22-68.220.241.50:54038.service: Deactivated successfully. Jan 24 00:49:56.136171 systemd[1]: session-3.scope: Deactivated successfully. Jan 24 00:49:56.138031 systemd-logind[1569]: Removed session 3. Jan 24 00:49:56.165439 systemd[1]: Started sshd@2-172.239.50.209:22-68.220.241.50:54046.service - OpenSSH per-connection server daemon (68.220.241.50:54046). Jan 24 00:49:56.320109 sshd[1778]: Accepted publickey for core from 68.220.241.50 port 54046 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:49:56.321975 sshd-session[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:49:56.327615 systemd-logind[1569]: New session 4 of user core. Jan 24 00:49:56.333992 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 24 00:49:56.390275 sshd[1782]: Connection closed by 68.220.241.50 port 54046 Jan 24 00:49:56.390724 sshd-session[1778]: pam_unix(sshd:session): session closed for user core Jan 24 00:49:56.396200 systemd[1]: sshd@2-172.239.50.209:22-68.220.241.50:54046.service: Deactivated successfully. Jan 24 00:49:56.398235 systemd[1]: session-4.scope: Deactivated successfully. Jan 24 00:49:56.399308 systemd-logind[1569]: Session 4 logged out. Waiting for processes to exit. Jan 24 00:49:56.401074 systemd-logind[1569]: Removed session 4. Jan 24 00:49:56.422189 systemd[1]: Started sshd@3-172.239.50.209:22-68.220.241.50:54050.service - OpenSSH per-connection server daemon (68.220.241.50:54050). Jan 24 00:49:56.584467 sshd[1788]: Accepted publickey for core from 68.220.241.50 port 54050 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:49:56.585155 sshd-session[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:49:56.590846 systemd-logind[1569]: New session 5 of user core. Jan 24 00:49:56.597141 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 24 00:49:56.663970 sshd[1792]: Connection closed by 68.220.241.50 port 54050 Jan 24 00:49:56.664435 sshd-session[1788]: pam_unix(sshd:session): session closed for user core Jan 24 00:49:56.669309 systemd-logind[1569]: Session 5 logged out. Waiting for processes to exit. Jan 24 00:49:56.669818 systemd[1]: sshd@3-172.239.50.209:22-68.220.241.50:54050.service: Deactivated successfully. Jan 24 00:49:56.671704 systemd[1]: session-5.scope: Deactivated successfully. Jan 24 00:49:56.673639 systemd-logind[1569]: Removed session 5. 
Jan 24 00:49:56.700723 systemd[1]: Started sshd@4-172.239.50.209:22-68.220.241.50:54056.service - OpenSSH per-connection server daemon (68.220.241.50:54056). Jan 24 00:49:56.860717 sshd[1799]: Accepted publickey for core from 68.220.241.50 port 54056 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:49:56.863107 sshd-session[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:49:56.868716 systemd-logind[1569]: New session 6 of user core. Jan 24 00:49:56.873960 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 24 00:49:56.926858 sudo[1804]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 24 00:49:56.927281 sudo[1804]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:49:56.945896 sudo[1804]: pam_unix(sudo:session): session closed for user root Jan 24 00:49:56.968914 sshd[1803]: Connection closed by 68.220.241.50 port 54056 Jan 24 00:49:56.970013 sshd-session[1799]: pam_unix(sshd:session): session closed for user core Jan 24 00:49:56.974350 systemd-logind[1569]: Session 6 logged out. Waiting for processes to exit. Jan 24 00:49:56.975121 systemd[1]: sshd@4-172.239.50.209:22-68.220.241.50:54056.service: Deactivated successfully. Jan 24 00:49:56.977386 systemd[1]: session-6.scope: Deactivated successfully. Jan 24 00:49:56.979452 systemd-logind[1569]: Removed session 6. Jan 24 00:49:57.005823 systemd[1]: Started sshd@5-172.239.50.209:22-68.220.241.50:54072.service - OpenSSH per-connection server daemon (68.220.241.50:54072). Jan 24 00:49:57.166991 sshd[1811]: Accepted publickey for core from 68.220.241.50 port 54072 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:49:57.169436 sshd-session[1811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:49:57.175046 systemd-logind[1569]: New session 7 of user core. Jan 24 00:49:57.179948 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 24 00:49:57.221694 sudo[1817]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 24 00:49:57.222105 sudo[1817]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:49:57.224602 sudo[1817]: pam_unix(sudo:session): session closed for user root Jan 24 00:49:57.231446 sudo[1816]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 24 00:49:57.231786 sudo[1816]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:49:57.240263 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Jan 24 00:49:57.285000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 24 00:49:57.286627 augenrules[1841]: No rules Jan 24 00:49:57.288070 kernel: kauditd_printk_skb: 74 callbacks suppressed Jan 24 00:49:57.288098 kernel: audit: type=1305 audit(1769215797.285:231): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 24 00:49:57.285000 audit[1841]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc267ce380 a2=420 a3=0 items=0 ppid=1822 pid=1841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:57.292618 systemd[1]: audit-rules.service: Deactivated successfully. Jan 24 00:49:57.293114 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 24 00:49:57.294816 kernel: audit: type=1300 audit(1769215797.285:231): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc267ce380 a2=420 a3=0 items=0 ppid=1822 pid=1841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:57.298005 sudo[1816]: pam_unix(sudo:session): session closed for user root Jan 24 00:49:57.285000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 24 00:49:57.301073 kernel: audit: type=1327 audit(1769215797.285:231): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 24 00:49:57.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:57.305358 kernel: audit: type=1130 audit(1769215797.293:232): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:57.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:57.310976 kernel: audit: type=1131 audit(1769215797.293:233): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:57.293000 audit[1816]: USER_END pid=1816 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:49:57.316939 kernel: audit: type=1106 audit(1769215797.293:234): pid=1816 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:49:57.293000 audit[1816]: CRED_DISP pid=1816 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 24 00:49:57.323424 kernel: audit: type=1104 audit(1769215797.293:235): pid=1816 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:49:57.323994 sshd[1815]: Connection closed by 68.220.241.50 port 54072 Jan 24 00:49:57.324356 sshd-session[1811]: pam_unix(sshd:session): session closed for user core Jan 24 00:49:57.328000 audit[1811]: USER_END pid=1811 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:49:57.340665 systemd[1]: sshd@5-172.239.50.209:22-68.220.241.50:54072.service: Deactivated successfully. Jan 24 00:49:57.349951 kernel: audit: type=1106 audit(1769215797.328:236): pid=1811 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:49:57.349990 kernel: audit: type=1104 audit(1769215797.328:237): pid=1811 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:49:57.328000 audit[1811]: CRED_DISP pid=1811 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:49:57.347987 systemd[1]: session-7.scope: Deactivated successfully. Jan 24 00:49:57.354828 kernel: audit: type=1131 audit(1769215797.338:238): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.239.50.209:22-68.220.241.50:54072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:57.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.239.50.209:22-68.220.241.50:54072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:57.352027 systemd-logind[1569]: Session 7 logged out. Waiting for processes to exit. Jan 24 00:49:57.362711 systemd[1]: Started sshd@6-172.239.50.209:22-68.220.241.50:54074.service - OpenSSH per-connection server daemon (68.220.241.50:54074). Jan 24 00:49:57.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.239.50.209:22-68.220.241.50:54074 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:57.364359 systemd-logind[1569]: Removed session 7. 
Jan 24 00:49:57.526000 audit[1851]: USER_ACCT pid=1851 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:49:57.527203 sshd[1851]: Accepted publickey for core from 68.220.241.50 port 54074 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:49:57.528000 audit[1851]: CRED_ACQ pid=1851 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:49:57.528000 audit[1851]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe5a9539a0 a2=3 a3=0 items=0 ppid=1 pid=1851 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:57.528000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:49:57.529670 sshd-session[1851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:49:57.535365 systemd-logind[1569]: New session 8 of user core. Jan 24 00:49:57.542943 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 24 00:49:57.545000 audit[1851]: USER_START pid=1851 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:49:57.546000 audit[1855]: CRED_ACQ pid=1855 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:49:57.578000 audit[1856]: USER_ACCT pid=1856 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:49:57.579650 sudo[1856]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 24 00:49:57.579000 audit[1856]: CRED_REFR pid=1856 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:49:57.580024 sudo[1856]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:49:57.579000 audit[1856]: USER_START pid=1856 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:49:57.933886 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 24 00:49:57.945103 (dockerd)[1874]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 24 00:49:58.210422 dockerd[1874]: time="2026-01-24T00:49:58.209963357Z" level=info msg="Starting up" Jan 24 00:49:58.212647 dockerd[1874]: time="2026-01-24T00:49:58.212628339Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 24 00:49:58.224683 dockerd[1874]: time="2026-01-24T00:49:58.224562565Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 24 00:49:58.243498 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1426208599-merged.mount: Deactivated successfully. Jan 24 00:49:58.280233 systemd[1]: var-lib-docker-metacopy\x2dcheck2659942715-merged.mount: Deactivated successfully. Jan 24 00:49:58.305816 dockerd[1874]: time="2026-01-24T00:49:58.305651585Z" level=info msg="Loading containers: start." Jan 24 00:49:58.314829 kernel: Initializing XFRM netlink socket Jan 24 00:49:58.373000 audit[1923]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1923 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:49:58.373000 audit[1923]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe6834dba0 a2=0 a3=0 items=0 ppid=1874 pid=1923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:58.373000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 24 00:49:58.376000 audit[1925]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1925 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:49:58.376000 audit[1925]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffdebd24d60 a2=0 a3=0 items=0 ppid=1874 pid=1925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:58.376000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 24 00:49:58.378000 audit[1927]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1927 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:49:58.378000 audit[1927]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe06491ca0 a2=0 a3=0 items=0 ppid=1874 pid=1927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:58.378000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 24 00:49:58.381000 audit[1929]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1929 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:49:58.381000 audit[1929]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffea641f760 a2=0 a3=0 items=0 ppid=1874 pid=1929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 
00:49:58.381000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 24 00:49:58.383000 audit[1931]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1931 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:49:58.383000 audit[1931]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc86d29180 a2=0 a3=0 items=0 ppid=1874 pid=1931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:58.383000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 24 00:49:58.385000 audit[1933]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1933 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:49:58.385000 audit[1933]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd27d5a030 a2=0 a3=0 items=0 ppid=1874 pid=1933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:58.385000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 24 00:49:58.388000 audit[1935]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1935 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:49:58.388000 audit[1935]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd72dfd130 a2=0 a3=0 items=0 ppid=1874 pid=1935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:58.388000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 24 00:49:58.391000 audit[1937]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1937 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:49:58.391000 audit[1937]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fffeee6d470 a2=0 a3=0 items=0 ppid=1874 pid=1937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:58.391000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 24 00:49:58.418000 audit[1940]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1940 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:49:58.418000 audit[1940]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffdf0cb6970 a2=0 a3=0 items=0 ppid=1874 pid=1940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:58.418000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 24 00:49:58.421000 audit[1942]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1942 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:49:58.421000 audit[1942]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffcecc4f3f0 a2=0 a3=0 items=0 ppid=1874 pid=1942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:58.421000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 24 00:49:58.424000 audit[1944]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1944 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:49:58.424000 audit[1944]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff41bb6c70 a2=0 a3=0 items=0 ppid=1874 pid=1944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:58.424000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 24 00:49:58.426000 audit[1946]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1946 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:49:58.426000 audit[1946]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fff20c41db0 a2=0 a3=0 items=0 ppid=1874 pid=1946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:58.426000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 24 00:49:58.429000 audit[1948]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1948 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:49:58.429000 audit[1948]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffffd6e78c0 a2=0 a3=0 items=0 ppid=1874 pid=1948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:58.429000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 24 00:49:58.471000 audit[1978]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1978 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:49:58.471000 audit[1978]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fffec4c3650 a2=0 a3=0 items=0 ppid=1874 pid=1978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:58.471000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 24 00:49:58.473000 audit[1980]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1980 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:49:58.473000 audit[1980]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffcbb1b0f10 a2=0 a3=0 items=0 ppid=1874 pid=1980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:58.473000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 24 00:49:58.475000 audit[1982]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1982 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:49:58.475000 audit[1982]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc8b0aca70 a2=0 a3=0 items=0 ppid=1874 pid=1982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:58.475000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 24 00:49:58.478000 audit[1984]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1984 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:49:58.478000 audit[1984]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc18e1a370 a2=0 a3=0 items=0 ppid=1874 pid=1984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:58.478000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 24 00:49:58.480000 audit[1986]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1986 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:49:58.480000 audit[1986]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd9aca1190 a2=0 a3=0 items=0 ppid=1874 pid=1986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:58.480000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 24 00:49:58.482000 audit[1988]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1988 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:49:58.482000 audit[1988]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffcfe807b30 a2=0 a3=0 items=0 ppid=1874 pid=1988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:58.482000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 24 00:49:58.484000 audit[1990]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1990 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:49:58.484000 audit[1990]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffedff76450 a2=0 a3=0 items=0 ppid=1874 pid=1990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:58.484000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 24 00:49:58.487000 audit[1992]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1992 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:49:58.487000 audit[1992]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fffc4cae580 a2=0 a3=0 items=0 ppid=1874 pid=1992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:58.487000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 24 00:49:58.490000 audit[1994]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1994 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:49:58.490000 audit[1994]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffd9569c700 a2=0 a3=0 items=0 ppid=1874 pid=1994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:58.490000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 24 00:49:58.492000 audit[1996]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1996 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:49:58.492000 audit[1996]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe23b43040 a2=0 a3=0 items=0 ppid=1874 pid=1996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:58.492000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 24 00:49:58.494000 audit[1998]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1998 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:49:58.494000 audit[1998]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffcc706b0b0 a2=0 a3=0 items=0 ppid=1874 pid=1998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:58.494000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 24 00:49:58.497000 audit[2000]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2000 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 24 00:49:58.497000 audit[2000]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffe07397040 a2=0 a3=0 items=0 ppid=1874 pid=2000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:58.497000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 24 00:49:58.499000 audit[2002]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2002 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:49:58.499000 audit[2002]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffe707b5c50 a2=0 a3=0 items=0 ppid=1874 pid=2002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:58.499000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 24 00:49:58.505000 audit[2007]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2007 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:49:58.505000 audit[2007]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffed21f6380 a2=0 a3=0 items=0 ppid=1874 pid=2007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:58.505000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 24 00:49:58.508000 audit[2009]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2009 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:49:58.508000 audit[2009]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fffd358a630 a2=0 a3=0 items=0 ppid=1874 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:58.508000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 24 00:49:58.510000 audit[2011]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2011 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:49:58.510000 audit[2011]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd06b961f0 a2=0 a3=0 items=0 ppid=1874 pid=2011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:58.510000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 24 00:49:58.512000 audit[2013]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2013 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:49:58.512000 audit[2013]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc33384500 a2=0 a3=0 items=0 ppid=1874 pid=2013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:58.512000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 24 00:49:58.515000 audit[2015]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2015 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:49:58.515000 audit[2015]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffcc5e81c00 a2=0 a3=0 items=0 ppid=1874 pid=2015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:58.515000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 24 00:49:58.517000 audit[2017]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2017 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:49:58.517000 audit[2017]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd906f1c90 a2=0 a3=0 items=0 ppid=1874 pid=2017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:58.517000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 24 00:49:58.536000 audit[2022]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2022 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:49:58.536000 audit[2022]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffcef3c2240 a2=0 a3=0 items=0 ppid=1874 pid=2022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:58.536000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 24 00:49:58.539000 audit[2025]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2025 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:49:58.539000 audit[2025]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffe7e2335e0 a2=0 a3=0 items=0 ppid=1874 pid=2025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:58.539000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 24 00:49:58.549000 audit[2033]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2033 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:49:58.549000 audit[2033]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7fffc12bc820 a2=0 a3=0 items=0 ppid=1874 pid=2033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
24 00:49:58.549000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 24 00:49:58.561000 audit[2039]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2039 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:49:58.561000 audit[2039]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffdd8d25320 a2=0 a3=0 items=0 ppid=1874 pid=2039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:58.561000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 24 00:49:58.564000 audit[2041]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2041 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:49:58.564000 audit[2041]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7fffc299fbb0 a2=0 a3=0 items=0 ppid=1874 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:58.564000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 24 00:49:58.566000 audit[2043]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2043 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:49:58.566000 audit[2043]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff82fef2b0 a2=0 a3=0 items=0 ppid=1874 pid=2043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:58.566000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 24 00:49:58.569000 audit[2045]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2045 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:49:58.569000 audit[2045]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fff8c41f920 a2=0 a3=0 items=0 ppid=1874 pid=2045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:58.569000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 24 00:49:58.572000 audit[2047]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:49:58.572000 audit[2047]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe7dd92590 a2=0 a3=0 items=0 ppid=1874 pid=2047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:49:58.572000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 24 00:49:58.573545 systemd-networkd[1499]: docker0: Link UP Jan 24 00:49:58.579559 dockerd[1874]: time="2026-01-24T00:49:58.579524192Z" level=info msg="Loading containers: done." Jan 24 00:49:58.598267 dockerd[1874]: time="2026-01-24T00:49:58.598223001Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 24 00:49:58.598393 dockerd[1874]: time="2026-01-24T00:49:58.598285131Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 24 00:49:58.598393 dockerd[1874]: time="2026-01-24T00:49:58.598358651Z" level=info msg="Initializing buildkit" Jan 24 00:49:58.624240 dockerd[1874]: time="2026-01-24T00:49:58.624212014Z" level=info msg="Completed buildkit initialization" Jan 24 00:49:58.630567 dockerd[1874]: time="2026-01-24T00:49:58.630541458Z" level=info msg="Daemon has completed initialization" Jan 24 00:49:58.630750 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 24 00:49:58.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:49:58.631734 dockerd[1874]: time="2026-01-24T00:49:58.631705438Z" level=info msg="API listen on /run/docker.sock" Jan 24 00:49:59.232048 containerd[1593]: time="2026-01-24T00:49:59.232014008Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 24 00:49:59.238485 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1527165788-merged.mount: Deactivated successfully. Jan 24 00:50:00.242383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3615970350.mount: Deactivated successfully. 
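The proctitle= fields in the audit records above are the hex-encoded argv of the process that triggered each rule change, with NUL bytes separating the arguments. A minimal Python sketch (illustrative, not part of the captured log) for turning one of them back into a readable command line:

```python
# Decode an audit PROCTITLE field: hex-encoded argv, NUL-separated.
def decode_proctitle(hex_title: str) -> str:
    argv = bytes.fromhex(hex_title).split(b"\x00")
    return " ".join(a.decode("utf-8", "replace") for a in argv)

# Example using the audit[2025] record above:
print(decode_proctitle(
    "2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174"
    "002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E"
))
# /usr/bin/iptables --wait -t nat -I DOCKER -i docker0 -j RETURN
```

Decoded this way, the run of NETFILTER_CFG records is simply dockerd invoking xtables-nft-multi to build its DOCKER, DOCKER-USER, DOCKER-FORWARD, DOCKER-CT, DOCKER-BRIDGE and isolation chains before bringing docker0 up.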
Jan 24 00:50:00.919191 containerd[1593]: time="2026-01-24T00:50:00.919143571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:00.920105 containerd[1593]: time="2026-01-24T00:50:00.920026602Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=25399329" Jan 24 00:50:00.920565 containerd[1593]: time="2026-01-24T00:50:00.920541132Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:00.923377 containerd[1593]: time="2026-01-24T00:50:00.922400333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:00.923377 containerd[1593]: time="2026-01-24T00:50:00.923266273Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 1.691220035s" Jan 24 00:50:00.923377 containerd[1593]: time="2026-01-24T00:50:00.923290493Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 24 00:50:00.924141 containerd[1593]: time="2026-01-24T00:50:00.924119074Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 24 00:50:02.046005 containerd[1593]: time="2026-01-24T00:50:02.045905514Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:02.047058 containerd[1593]: time="2026-01-24T00:50:02.046883295Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21154285" Jan 24 00:50:02.047631 containerd[1593]: time="2026-01-24T00:50:02.047606025Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:02.049675 containerd[1593]: time="2026-01-24T00:50:02.049645756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:02.050557 containerd[1593]: time="2026-01-24T00:50:02.050534596Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 1.126312232s" Jan 24 00:50:02.050641 containerd[1593]: time="2026-01-24T00:50:02.050625246Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 24 00:50:02.051256 
containerd[1593]: time="2026-01-24T00:50:02.051133497Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 24 00:50:02.977700 containerd[1593]: time="2026-01-24T00:50:02.977643810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:02.978460 containerd[1593]: time="2026-01-24T00:50:02.978437950Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=0" Jan 24 00:50:02.978950 containerd[1593]: time="2026-01-24T00:50:02.978913220Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:02.982251 containerd[1593]: time="2026-01-24T00:50:02.981110991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:02.982251 containerd[1593]: time="2026-01-24T00:50:02.981918782Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 930.760285ms" Jan 24 00:50:02.982251 containerd[1593]: time="2026-01-24T00:50:02.981940452Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 24 00:50:02.982474 containerd[1593]: time="2026-01-24T00:50:02.982457242Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 24 00:50:03.909888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3799082430.mount: Deactivated successfully. 
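containerd reports each pull time as a Go duration string ("1.691220035s", "930.760285ms"). A small sketch, handling only the ms/s suffixes that actually appear above, that normalizes them to seconds and totals the three control-plane image pulls logged so far:

```python
# Normalize the Go-style duration strings from the pull messages above.
def to_seconds(d: str) -> float:
    if d.endswith("ms"):
        return float(d[:-2]) / 1000.0
    if d.endswith("s"):
        return float(d[:-1])
    raise ValueError(f"unhandled duration: {d}")

pulls = {
    "kube-apiserver:v1.34.3": "1.691220035s",
    "kube-controller-manager:v1.34.3": "1.126312232s",
    "kube-scheduler:v1.34.3": "930.760285ms",
}
total = sum(to_seconds(d) for d in pulls.values())
print(f"{total:.3f}s")   # ~3.748s for the three control-plane images
```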
Jan 24 00:50:04.154137 containerd[1593]: time="2026-01-24T00:50:04.154095897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:04.155044 containerd[1593]: time="2026-01-24T00:50:04.154832368Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=0" Jan 24 00:50:04.155524 containerd[1593]: time="2026-01-24T00:50:04.155498178Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:04.156948 containerd[1593]: time="2026-01-24T00:50:04.156928229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:04.157491 containerd[1593]: time="2026-01-24T00:50:04.157469949Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.174844927s" Jan 24 00:50:04.157567 containerd[1593]: time="2026-01-24T00:50:04.157552199Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 24 00:50:04.158060 containerd[1593]: time="2026-01-24T00:50:04.158039249Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 24 00:50:04.616846 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 24 00:50:04.619225 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:50:04.661431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount183775732.mount: Deactivated successfully. Jan 24 00:50:04.853381 kernel: kauditd_printk_skb: 132 callbacks suppressed Jan 24 00:50:04.853468 kernel: audit: type=1130 audit(1769215804.844:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:50:04.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:50:04.844992 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:50:04.862615 (kubelet)[2181]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:50:04.907295 kubelet[2181]: E0124 00:50:04.907183 2181 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:50:04.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 24 00:50:04.912430 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:50:04.912618 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:50:04.913111 systemd[1]: kubelet.service: Consumed 190ms CPU time, 112.2M memory peak. Jan 24 00:50:04.919919 kernel: audit: type=1131 audit(1769215804.912:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:50:05.354175 containerd[1593]: time="2026-01-24T00:50:05.354035997Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:05.355834 containerd[1593]: time="2026-01-24T00:50:05.355710588Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=129107" Jan 24 00:50:05.356459 containerd[1593]: time="2026-01-24T00:50:05.356418608Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:05.359637 containerd[1593]: time="2026-01-24T00:50:05.358604539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:05.359637 containerd[1593]: time="2026-01-24T00:50:05.359455270Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.201390491s" Jan 24 00:50:05.359637 containerd[1593]: time="2026-01-24T00:50:05.359477270Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 24 00:50:05.359953 containerd[1593]: time="2026-01-24T00:50:05.359937020Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 24 00:50:05.797534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3248380859.mount: Deactivated successfully. 
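The kauditd lines carry their own clock: audit(1769215804.844:289) is seconds since the Unix epoch plus a record serial number. A quick check (illustrative, not from the log) that the epoch part agrees with the journal's Jan 24 00:50:04.844 prefix:

```python
from datetime import datetime, timezone

# audit(<epoch>.<msec>:<serial>) -- epoch part of the record above.
stamp = 1769215804.844
print(datetime.fromtimestamp(stamp, tz=timezone.utc).isoformat())
# 2026-01-24T00:50:04.844000+00:00, matching the journal prefix.
```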
Jan 24 00:50:05.802859 containerd[1593]: time="2026-01-24T00:50:05.802748231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:05.804091 containerd[1593]: time="2026-01-24T00:50:05.803917532Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Jan 24 00:50:05.804864 containerd[1593]: time="2026-01-24T00:50:05.804835932Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:05.807511 containerd[1593]: time="2026-01-24T00:50:05.807482774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:05.808258 containerd[1593]: time="2026-01-24T00:50:05.808234204Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 448.228294ms" Jan 24 00:50:05.808342 containerd[1593]: time="2026-01-24T00:50:05.808325714Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 24 00:50:05.809052 containerd[1593]: time="2026-01-24T00:50:05.809005484Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 24 00:50:06.351459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2640958535.mount: Deactivated successfully. Jan 24 00:50:08.567196 containerd[1593]: time="2026-01-24T00:50:08.567121072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:08.568231 containerd[1593]: time="2026-01-24T00:50:08.568036183Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=61186606" Jan 24 00:50:08.568713 containerd[1593]: time="2026-01-24T00:50:08.568678403Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:08.570862 containerd[1593]: time="2026-01-24T00:50:08.570839764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:08.571661 containerd[1593]: time="2026-01-24T00:50:08.571634515Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 2.762600671s" Jan 24 00:50:08.571713 containerd[1593]: time="2026-01-24T00:50:08.571662615Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 24 00:50:11.314932 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
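The etcd pull above reports both the bytes fetched (bytes read=61186606) and the wall-clock time (2.762600671s), so the effective transfer rate drops out directly. A rough calculation using just those two logged numbers:

```python
# Effective pull rate for registry.k8s.io/etcd:3.6.4-0, from the
# "bytes read" and duration reported in the containerd messages above.
bytes_read = 61_186_606
seconds = 2.762600671
rate_mib_s = bytes_read / seconds / (1024 * 1024)
print(f"{rate_mib_s:.1f} MiB/s")   # roughly 21 MiB/s
```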
Jan 24 00:50:11.315084 systemd[1]: kubelet.service: Consumed 190ms CPU time, 112.2M memory peak. Jan 24 00:50:11.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:50:11.323403 kernel: audit: type=1130 audit(1769215811.314:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:50:11.319908 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:50:11.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:50:11.331831 kernel: audit: type=1131 audit(1769215811.314:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:50:11.359833 systemd[1]: Reload requested from client PID 2314 ('systemctl') (unit session-8.scope)... Jan 24 00:50:11.359852 systemd[1]: Reloading... Jan 24 00:50:11.524000 zram_generator::config[2364]: No configuration found. Jan 24 00:50:11.753398 systemd[1]: Reloading finished in 392 ms. Jan 24 00:50:11.776000 audit: BPF prog-id=67 op=LOAD Jan 24 00:50:11.779818 kernel: audit: type=1334 audit(1769215811.776:293): prog-id=67 op=LOAD Jan 24 00:50:11.779863 kernel: audit: type=1334 audit(1769215811.776:294): prog-id=59 op=UNLOAD Jan 24 00:50:11.776000 audit: BPF prog-id=59 op=UNLOAD Jan 24 00:50:11.779000 audit: BPF prog-id=68 op=LOAD Jan 24 00:50:11.779000 audit: BPF prog-id=60 op=UNLOAD Jan 24 00:50:11.779000 audit: BPF prog-id=69 op=LOAD Jan 24 00:50:11.779000 audit: BPF prog-id=70 op=LOAD Jan 24 00:50:11.779000 audit: BPF prog-id=61 op=UNLOAD Jan 24 00:50:11.779000 audit: BPF prog-id=62 op=UNLOAD Jan 24 00:50:11.779000 audit: BPF prog-id=71 op=LOAD Jan 24 00:50:11.779000 audit: BPF prog-id=58 op=UNLOAD Jan 24 00:50:11.781000 audit: BPF prog-id=72 op=LOAD Jan 24 00:50:11.781000 audit: BPF prog-id=55 op=UNLOAD Jan 24 00:50:11.783836 kernel: audit: type=1334 audit(1769215811.779:295): prog-id=68 op=LOAD Jan 24 00:50:11.783875 kernel: audit: type=1334 audit(1769215811.779:296): prog-id=60 op=UNLOAD Jan 24 00:50:11.783899 kernel: audit: type=1334 audit(1769215811.779:297): prog-id=69 op=LOAD Jan 24 00:50:11.783917 kernel: audit: type=1334 audit(1769215811.779:298): prog-id=70 op=LOAD Jan 24 00:50:11.783933 kernel: audit: type=1334 audit(1769215811.779:299): prog-id=61 op=UNLOAD Jan 24 00:50:11.783960 kernel: audit: type=1334 audit(1769215811.779:300): prog-id=62 op=UNLOAD Jan 24 00:50:11.781000 audit: BPF prog-id=73 op=LOAD Jan 24 00:50:11.781000 audit: BPF prog-id=74 op=LOAD Jan 24 00:50:11.781000 audit: BPF prog-id=56 op=UNLOAD Jan 24 00:50:11.781000 audit: BPF prog-id=57 op=UNLOAD Jan 24 00:50:11.781000 audit: BPF prog-id=75 op=LOAD Jan 24 00:50:11.781000 audit: BPF prog-id=43 op=UNLOAD Jan 24 00:50:11.781000 audit: BPF prog-id=76 op=LOAD Jan 24 00:50:11.781000 audit: BPF prog-id=77 op=LOAD Jan 24 00:50:11.781000 audit: BPF prog-id=44 op=UNLOAD Jan 24 00:50:11.781000 audit: BPF prog-id=45 op=UNLOAD Jan 24 00:50:11.783000 audit: BPF prog-id=78 op=LOAD Jan 24 00:50:11.783000 audit: BPF prog-id=48 op=UNLOAD Jan 24 
00:50:11.783000 audit: BPF prog-id=79 op=LOAD Jan 24 00:50:11.783000 audit: BPF prog-id=80 op=LOAD Jan 24 00:50:11.783000 audit: BPF prog-id=46 op=UNLOAD Jan 24 00:50:11.783000 audit: BPF prog-id=47 op=UNLOAD Jan 24 00:50:11.783000 audit: BPF prog-id=81 op=LOAD Jan 24 00:50:11.783000 audit: BPF prog-id=49 op=UNLOAD Jan 24 00:50:11.783000 audit: BPF prog-id=82 op=LOAD Jan 24 00:50:11.783000 audit: BPF prog-id=83 op=LOAD Jan 24 00:50:11.783000 audit: BPF prog-id=50 op=UNLOAD Jan 24 00:50:11.785000 audit: BPF prog-id=51 op=UNLOAD Jan 24 00:50:11.785000 audit: BPF prog-id=84 op=LOAD Jan 24 00:50:11.785000 audit: BPF prog-id=52 op=UNLOAD Jan 24 00:50:11.785000 audit: BPF prog-id=85 op=LOAD Jan 24 00:50:11.785000 audit: BPF prog-id=86 op=LOAD Jan 24 00:50:11.785000 audit: BPF prog-id=53 op=UNLOAD Jan 24 00:50:11.785000 audit: BPF prog-id=54 op=UNLOAD Jan 24 00:50:11.788000 audit: BPF prog-id=87 op=LOAD Jan 24 00:50:11.788000 audit: BPF prog-id=66 op=UNLOAD Jan 24 00:50:11.791000 audit: BPF prog-id=88 op=LOAD Jan 24 00:50:11.791000 audit: BPF prog-id=63 op=UNLOAD Jan 24 00:50:11.791000 audit: BPF prog-id=89 op=LOAD Jan 24 00:50:11.791000 audit: BPF prog-id=90 op=LOAD Jan 24 00:50:11.791000 audit: BPF prog-id=64 op=UNLOAD Jan 24 00:50:11.791000 audit: BPF prog-id=65 op=UNLOAD Jan 24 00:50:11.813402 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 24 00:50:11.813934 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 24 00:50:11.814515 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:50:11.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:50:11.814645 systemd[1]: kubelet.service: Consumed 159ms CPU time, 98.4M memory peak. Jan 24 00:50:11.818066 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:50:11.986439 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:50:11.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:50:11.993400 (kubelet)[2416]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:50:12.029895 kubelet[2416]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:50:12.029895 kubelet[2416]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:50:12.030640 kubelet[2416]: I0124 00:50:12.029880 2416 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:50:12.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.239.50.209:22-164.52.24.180:58933 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:50:12.290076 systemd[1]: Started sshd@7-172.239.50.209:22-164.52.24.180:58933.service - OpenSSH per-connection server daemon (164.52.24.180:58933). Jan 24 00:50:12.318091 sshd[2423]: banner exchange: Connection from 164.52.24.180 port 58933: invalid format Jan 24 00:50:12.320128 systemd[1]: sshd@7-172.239.50.209:22-164.52.24.180:58933.service: Deactivated successfully. Jan 24 00:50:12.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.239.50.209:22-164.52.24.180:58933 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:50:12.568091 kubelet[2416]: I0124 00:50:12.567951 2416 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 24 00:50:12.568091 kubelet[2416]: I0124 00:50:12.567981 2416 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:50:12.571177 kubelet[2416]: I0124 00:50:12.571148 2416 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 24 00:50:12.571177 kubelet[2416]: I0124 00:50:12.571167 2416 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 24 00:50:12.571396 kubelet[2416]: I0124 00:50:12.571372 2416 server.go:956] "Client rotation is on, will bootstrap in background" Jan 24 00:50:12.579764 kubelet[2416]: E0124 00:50:12.579623 2416 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.239.50.209:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.239.50.209:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 24 00:50:12.579764 kubelet[2416]: I0124 00:50:12.579644 2416 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:50:12.583402 kubelet[2416]: I0124 00:50:12.583387 2416 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 24 00:50:12.587231 kubelet[2416]: I0124 00:50:12.587159 2416 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 24 00:50:12.589258 kubelet[2416]: I0124 00:50:12.589229 2416 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:50:12.589437 kubelet[2416]: I0124 00:50:12.589317 2416 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-239-50-209","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:50:12.589583 kubelet[2416]: I0124 00:50:12.589571 2416 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:50:12.589640 kubelet[2416]: I0124 00:50:12.589631 2416 container_manager_linux.go:306] "Creating device plugin manager" Jan 24 00:50:12.589763 kubelet[2416]: I0124 00:50:12.589752 2416 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 24 00:50:12.591861 kubelet[2416]: I0124 00:50:12.591846 2416 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:50:12.592335 kubelet[2416]: I0124 00:50:12.592320 2416 kubelet.go:475] "Attempting to sync node with API server" Jan 24 00:50:12.592419 kubelet[2416]: I0124 00:50:12.592406 2416 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:50:12.592497 kubelet[2416]: I0124 00:50:12.592487 2416 kubelet.go:387] "Adding apiserver pod source" Jan 24 00:50:12.592560 kubelet[2416]: I0124 00:50:12.592551 2416 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:50:12.596123 kubelet[2416]: E0124 00:50:12.596101 2416 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.239.50.209:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-239-50-209&limit=500&resourceVersion=0\": dial tcp 172.239.50.209:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 24 00:50:12.596263 kubelet[2416]: E0124 00:50:12.596247 2416 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.239.50.209:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": 
dial tcp 172.239.50.209:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 24 00:50:12.596824 kubelet[2416]: I0124 00:50:12.596504 2416 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 24 00:50:12.597027 kubelet[2416]: I0124 00:50:12.597008 2416 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 24 00:50:12.597134 kubelet[2416]: I0124 00:50:12.597120 2416 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 24 00:50:12.597232 kubelet[2416]: W0124 00:50:12.597220 2416 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 24 00:50:12.600940 kubelet[2416]: I0124 00:50:12.600918 2416 server.go:1262] "Started kubelet" Jan 24 00:50:12.602152 kubelet[2416]: I0124 00:50:12.601683 2416 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:50:12.605492 kubelet[2416]: I0124 00:50:12.605463 2416 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:50:12.611388 kubelet[2416]: E0124 00:50:12.605919 2416 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.239.50.209:6443/api/v1/namespaces/default/events\": dial tcp 172.239.50.209:6443: connect: connection refused" event="&Event{ObjectMeta:{172-239-50-209.188d846869149616 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-239-50-209,UID:172-239-50-209,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-239-50-209,},FirstTimestamp:2026-01-24 00:50:12.600894998 +0000 UTC m=+0.603969483,LastTimestamp:2026-01-24 00:50:12.600894998 +0000 UTC m=+0.603969483,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-239-50-209,}" Jan 24 00:50:12.611388 kubelet[2416]: I0124 00:50:12.608282 2416 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:50:12.611388 kubelet[2416]: I0124 00:50:12.608320 2416 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 24 00:50:12.611388 kubelet[2416]: I0124 00:50:12.608723 2416 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:50:12.616198 kubelet[2416]: I0124 00:50:12.616175 2416 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:50:12.617000 audit[2437]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2437 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:50:12.617000 audit[2437]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffdf49be20 a2=0 a3=0 items=0 ppid=2416 pid=2437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:12.617000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 24 00:50:12.620000 audit[2438]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2438 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:50:12.620000 audit[2438]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffca811de90 a2=0 a3=0 items=0 ppid=2416 pid=2438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:12.620000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 24 00:50:12.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.239.50.209:22-164.52.24.180:40631 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:50:12.626610 kubelet[2416]: I0124 00:50:12.626343 2416 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 24 00:50:12.626384 systemd[1]: Started sshd@8-172.239.50.209:22-164.52.24.180:40631.service - OpenSSH per-connection server daemon (164.52.24.180:40631). Jan 24 00:50:12.627951 kubelet[2416]: E0124 00:50:12.626910 2416 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-239-50-209\" not found" Jan 24 00:50:12.627951 kubelet[2416]: I0124 00:50:12.626951 2416 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 24 00:50:12.627951 kubelet[2416]: I0124 00:50:12.626990 2416 reconciler.go:29] "Reconciler: start to sync state" Jan 24 00:50:12.627951 kubelet[2416]: E0124 00:50:12.627214 2416 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.50.209:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-50-209?timeout=10s\": dial tcp 172.239.50.209:6443: connect: connection refused" interval="200ms" Jan 24 00:50:12.627000 audit[2440]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2440 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:50:12.627000 audit[2440]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffcb56d83e0 a2=0 a3=0 items=0 ppid=2416 pid=2440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:12.627000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 24 00:50:12.628989 kubelet[2416]: E0124 00:50:12.628970 2416 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.239.50.209:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.239.50.209:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 24 00:50:12.631864 kubelet[2416]: I0124 00:50:12.630933 2416 server.go:310] "Adding debug handlers to kubelet server" Jan 24 00:50:12.632000 audit[2443]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2443 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:50:12.632000 audit[2443]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc1c086880 a2=0 a3=0 items=0 ppid=2416 pid=2443 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:12.632000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 24 00:50:12.633888 kubelet[2416]: I0124 00:50:12.632911 2416 factory.go:223] Registration of the systemd container factory successfully Jan 24 00:50:12.633888 kubelet[2416]: I0124 00:50:12.632991 2416 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:50:12.635030 kubelet[2416]: I0124 00:50:12.635006 2416 factory.go:223] Registration of the containerd container factory successfully Jan 24 00:50:12.648000 audit[2446]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2446 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:50:12.648000 audit[2446]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffd58403c80 a2=0 a3=0 items=0 ppid=2416 pid=2446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:12.648000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E Jan 24 00:50:12.650929 kubelet[2416]: I0124 00:50:12.650892 2416 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 24 00:50:12.651000 audit[2449]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2449 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:50:12.651000 audit[2449]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff7c14ea10 a2=0 a3=0 items=0 ppid=2416 pid=2449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:12.651000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 24 00:50:12.652651 kubelet[2416]: I0124 00:50:12.652631 2416 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 24 00:50:12.652651 kubelet[2416]: I0124 00:50:12.652650 2416 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 24 00:50:12.652734 kubelet[2416]: I0124 00:50:12.652679 2416 kubelet.go:2427] "Starting kubelet main sync loop" Jan 24 00:50:12.652734 kubelet[2416]: E0124 00:50:12.652725 2416 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:50:12.653000 audit[2450]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2450 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:50:12.653000 audit[2450]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd7e04e040 a2=0 a3=0 items=0 ppid=2416 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:12.653000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 24 00:50:12.655785 sshd[2441]: banner exchange: Connection from 164.52.24.180 port 40631: invalid format Jan 24 00:50:12.654000 audit[2451]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2451 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:50:12.654000 audit[2451]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe09a87690 a2=0 a3=0 items=0 ppid=2416 pid=2451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:12.654000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 24 00:50:12.656000 audit[2452]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2452 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:50:12.656000 audit[2452]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcede68070 a2=0 a3=0 items=0 ppid=2416 pid=2452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:12.656000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 24 00:50:12.658041 systemd[1]: sshd@8-172.239.50.209:22-164.52.24.180:40631.service: Deactivated successfully. 
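On the network side, the kubelet's first moves are registering its KUBE-IPTABLES-HINT, KUBE-FIREWALL and KUBE-KUBELET-CANARY chains, each producing a NETFILTER_CFG + SYSCALL + PROCTITLE triplet like the ones above. The NETFILTER_CFG records follow a fixed shape (table=<name>:<generation> family=<af> ... op=<op>), so they can be tallied mechanically. A small sketch, assuming the journal text has been saved to a file:

```python
import re
from collections import Counter

# family 2 is AF_INET (IPv4), family 10 is AF_INET6 (IPv6).
PAT = re.compile(r"NETFILTER_CFG table=(\w+):\d+ family=(\d+) .*?op=(\w+)")
AF = {"2": "ipv4", "10": "ipv6"}

def tally(lines):
    counts = Counter()
    for line in lines:
        for table, family, op in PAT.findall(line):
            counts[(table, AF.get(family, family), op)] += 1
    return counts

# e.g. tally(open("journal.txt")) -- "journal.txt" is a placeholder for
# wherever this log text is stored; the result shows the KUBE-* chains
# being registered in mangle/nat/filter for both IPv4 and IPv6.
```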
Jan 24 00:50:12.657000 audit[2454]: NETFILTER_CFG table=mangle:51 family=10 entries=1 op=nft_register_chain pid=2454 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:50:12.657000 audit[2454]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc66640360 a2=0 a3=0 items=0 ppid=2416 pid=2454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:12.657000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 24 00:50:12.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.239.50.209:22-164.52.24.180:40631 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:50:12.660000 audit[2455]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2455 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:50:12.661565 kubelet[2416]: E0124 00:50:12.661024 2416 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:50:12.660000 audit[2455]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd218775a0 a2=0 a3=0 items=0 ppid=2416 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:12.660000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 24 00:50:12.661998 kubelet[2416]: E0124 00:50:12.661932 2416 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.239.50.209:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.239.50.209:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 24 00:50:12.663000 audit[2458]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2458 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:50:12.663000 audit[2458]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcb5d1abd0 a2=0 a3=0 items=0 ppid=2416 pid=2458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:12.663000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 24 00:50:12.668122 kubelet[2416]: I0124 00:50:12.667902 2416 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:50:12.668122 kubelet[2416]: I0124 00:50:12.667914 2416 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:50:12.668122 kubelet[2416]: I0124 00:50:12.667930 2416 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:50:12.671819 kubelet[2416]: I0124 00:50:12.671684 2416 policy_none.go:49] "None policy: Start" Jan 24 00:50:12.671819 kubelet[2416]: I0124 00:50:12.671701 2416 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 24 00:50:12.671819 kubelet[2416]: I0124 00:50:12.671712 2416 state_mem.go:36] "Initializing new in-memory state 
store" logger="Memory Manager state checkpoint" Jan 24 00:50:12.672521 kubelet[2416]: I0124 00:50:12.672509 2416 policy_none.go:47] "Start" Jan 24 00:50:12.677829 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 24 00:50:12.689849 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 24 00:50:12.694396 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 24 00:50:12.703226 kubelet[2416]: E0124 00:50:12.703020 2416 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 24 00:50:12.704392 kubelet[2416]: I0124 00:50:12.703912 2416 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:50:12.704834 kubelet[2416]: I0124 00:50:12.704783 2416 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:50:12.705073 kubelet[2416]: I0124 00:50:12.705034 2416 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:50:12.707010 kubelet[2416]: E0124 00:50:12.706683 2416 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 24 00:50:12.707010 kubelet[2416]: E0124 00:50:12.706985 2416 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-239-50-209\" not found" Jan 24 00:50:12.765385 systemd[1]: Created slice kubepods-burstable-pod91bd88b4cf2ab038dfeb942e84904440.slice - libcontainer container kubepods-burstable-pod91bd88b4cf2ab038dfeb942e84904440.slice. Jan 24 00:50:12.777695 kubelet[2416]: E0124 00:50:12.777644 2416 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-50-209\" not found" node="172-239-50-209" Jan 24 00:50:12.780352 systemd[1]: Created slice kubepods-burstable-pod967450cde41cd6c69fec74b80c0fe93b.slice - libcontainer container kubepods-burstable-pod967450cde41cd6c69fec74b80c0fe93b.slice. Jan 24 00:50:12.796272 kubelet[2416]: E0124 00:50:12.796105 2416 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-50-209\" not found" node="172-239-50-209" Jan 24 00:50:12.799080 systemd[1]: Created slice kubepods-burstable-pod7a423460381f1bff931bf42b7a1127fd.slice - libcontainer container kubepods-burstable-pod7a423460381f1bff931bf42b7a1127fd.slice. 
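The container_manager_linux nodeConfig record a few entries earlier spells out the hard eviction thresholds as a JSON list. Rendered in the more familiar signal<value> form (a sketch using only the values shown in that record):

```python
# HardEvictionThresholds from the nodeConfig record above, rewritten as
# the conventional comma-separated "signal<value>" form.
thresholds = [
    ("imagefs.inodesFree", None, 0.05),
    ("memory.available", "100Mi", None),
    ("nodefs.available", None, 0.10),
    ("nodefs.inodesFree", None, 0.05),
    ("imagefs.available", None, 0.15),
]

rendered = ",".join(
    f"{signal}<{quantity if quantity else f'{pct:.0%}'}"
    for signal, quantity, pct in thresholds
)
print(rendered)
# imagefs.inodesFree<5%,memory.available<100Mi,nodefs.available<10%,
# nodefs.inodesFree<5%,imagefs.available<15%
```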
Jan 24 00:50:12.801478 kubelet[2416]: E0124 00:50:12.801448 2416 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-50-209\" not found" node="172-239-50-209" Jan 24 00:50:12.806992 kubelet[2416]: I0124 00:50:12.806966 2416 kubelet_node_status.go:75] "Attempting to register node" node="172-239-50-209" Jan 24 00:50:12.807289 kubelet[2416]: E0124 00:50:12.807267 2416 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.50.209:6443/api/v1/nodes\": dial tcp 172.239.50.209:6443: connect: connection refused" node="172-239-50-209" Jan 24 00:50:12.827682 kubelet[2416]: I0124 00:50:12.827600 2416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/91bd88b4cf2ab038dfeb942e84904440-k8s-certs\") pod \"kube-apiserver-172-239-50-209\" (UID: \"91bd88b4cf2ab038dfeb942e84904440\") " pod="kube-system/kube-apiserver-172-239-50-209" Jan 24 00:50:12.827682 kubelet[2416]: I0124 00:50:12.827624 2416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/967450cde41cd6c69fec74b80c0fe93b-ca-certs\") pod \"kube-controller-manager-172-239-50-209\" (UID: \"967450cde41cd6c69fec74b80c0fe93b\") " pod="kube-system/kube-controller-manager-172-239-50-209" Jan 24 00:50:12.827682 kubelet[2416]: I0124 00:50:12.827638 2416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/967450cde41cd6c69fec74b80c0fe93b-k8s-certs\") pod \"kube-controller-manager-172-239-50-209\" (UID: \"967450cde41cd6c69fec74b80c0fe93b\") " pod="kube-system/kube-controller-manager-172-239-50-209" Jan 24 00:50:12.827682 kubelet[2416]: I0124 00:50:12.827651 2416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/967450cde41cd6c69fec74b80c0fe93b-kubeconfig\") pod \"kube-controller-manager-172-239-50-209\" (UID: \"967450cde41cd6c69fec74b80c0fe93b\") " pod="kube-system/kube-controller-manager-172-239-50-209" Jan 24 00:50:12.827682 kubelet[2416]: I0124 00:50:12.827664 2416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/967450cde41cd6c69fec74b80c0fe93b-usr-share-ca-certificates\") pod \"kube-controller-manager-172-239-50-209\" (UID: \"967450cde41cd6c69fec74b80c0fe93b\") " pod="kube-system/kube-controller-manager-172-239-50-209" Jan 24 00:50:12.828440 kubelet[2416]: I0124 00:50:12.828416 2416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/91bd88b4cf2ab038dfeb942e84904440-ca-certs\") pod \"kube-apiserver-172-239-50-209\" (UID: \"91bd88b4cf2ab038dfeb942e84904440\") " pod="kube-system/kube-apiserver-172-239-50-209" Jan 24 00:50:12.828480 kubelet[2416]: I0124 00:50:12.828442 2416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/91bd88b4cf2ab038dfeb942e84904440-usr-share-ca-certificates\") pod \"kube-apiserver-172-239-50-209\" (UID: \"91bd88b4cf2ab038dfeb942e84904440\") " pod="kube-system/kube-apiserver-172-239-50-209" Jan 24 00:50:12.828480 kubelet[2416]: 
I0124 00:50:12.828457 2416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/967450cde41cd6c69fec74b80c0fe93b-flexvolume-dir\") pod \"kube-controller-manager-172-239-50-209\" (UID: \"967450cde41cd6c69fec74b80c0fe93b\") " pod="kube-system/kube-controller-manager-172-239-50-209" Jan 24 00:50:12.828480 kubelet[2416]: I0124 00:50:12.828469 2416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7a423460381f1bff931bf42b7a1127fd-kubeconfig\") pod \"kube-scheduler-172-239-50-209\" (UID: \"7a423460381f1bff931bf42b7a1127fd\") " pod="kube-system/kube-scheduler-172-239-50-209" Jan 24 00:50:12.828598 kubelet[2416]: E0124 00:50:12.828571 2416 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.50.209:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-50-209?timeout=10s\": dial tcp 172.239.50.209:6443: connect: connection refused" interval="400ms" Jan 24 00:50:13.009162 kubelet[2416]: I0124 00:50:13.009134 2416 kubelet_node_status.go:75] "Attempting to register node" node="172-239-50-209" Jan 24 00:50:13.009499 kubelet[2416]: E0124 00:50:13.009469 2416 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.50.209:6443/api/v1/nodes\": dial tcp 172.239.50.209:6443: connect: connection refused" node="172-239-50-209" Jan 24 00:50:13.047295 kubelet[2416]: E0124 00:50:13.047152 2416 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.239.50.209:6443/api/v1/namespaces/default/events\": dial tcp 172.239.50.209:6443: connect: connection refused" event="&Event{ObjectMeta:{172-239-50-209.188d846869149616 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-239-50-209,UID:172-239-50-209,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-239-50-209,},FirstTimestamp:2026-01-24 00:50:12.600894998 +0000 UTC m=+0.603969483,LastTimestamp:2026-01-24 00:50:12.600894998 +0000 UTC m=+0.603969483,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-239-50-209,}" Jan 24 00:50:13.080746 kubelet[2416]: E0124 00:50:13.080670 2416 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:13.082202 containerd[1593]: time="2026-01-24T00:50:13.082053798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-239-50-209,Uid:91bd88b4cf2ab038dfeb942e84904440,Namespace:kube-system,Attempt:0,}" Jan 24 00:50:13.099527 kubelet[2416]: E0124 00:50:13.099462 2416 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:13.100431 containerd[1593]: time="2026-01-24T00:50:13.099997367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-239-50-209,Uid:967450cde41cd6c69fec74b80c0fe93b,Namespace:kube-system,Attempt:0,}" Jan 24 00:50:13.103616 kubelet[2416]: E0124 00:50:13.103590 2416 dns.go:154] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:13.104820 containerd[1593]: time="2026-01-24T00:50:13.104195779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-239-50-209,Uid:7a423460381f1bff931bf42b7a1127fd,Namespace:kube-system,Attempt:0,}" Jan 24 00:50:13.230058 kubelet[2416]: E0124 00:50:13.230007 2416 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.50.209:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-50-209?timeout=10s\": dial tcp 172.239.50.209:6443: connect: connection refused" interval="800ms" Jan 24 00:50:13.412259 kubelet[2416]: I0124 00:50:13.411949 2416 kubelet_node_status.go:75] "Attempting to register node" node="172-239-50-209" Jan 24 00:50:13.412672 kubelet[2416]: E0124 00:50:13.412616 2416 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.50.209:6443/api/v1/nodes\": dial tcp 172.239.50.209:6443: connect: connection refused" node="172-239-50-209" Jan 24 00:50:13.535662 kubelet[2416]: E0124 00:50:13.535122 2416 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.239.50.209:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-239-50-209&limit=500&resourceVersion=0\": dial tcp 172.239.50.209:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 24 00:50:13.542001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount274381471.mount: Deactivated successfully. Jan 24 00:50:13.545909 containerd[1593]: time="2026-01-24T00:50:13.545875980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:50:13.547909 containerd[1593]: time="2026-01-24T00:50:13.547871431Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 24 00:50:13.548496 containerd[1593]: time="2026-01-24T00:50:13.548468421Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:50:13.549013 containerd[1593]: time="2026-01-24T00:50:13.548989742Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:50:13.549583 containerd[1593]: time="2026-01-24T00:50:13.549542472Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 24 00:50:13.550823 containerd[1593]: time="2026-01-24T00:50:13.550150142Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:50:13.550823 containerd[1593]: time="2026-01-24T00:50:13.550586962Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 24 00:50:13.552278 containerd[1593]: time="2026-01-24T00:50:13.552258563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:50:13.555920 containerd[1593]: time="2026-01-24T00:50:13.553836614Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 469.908615ms" Jan 24 00:50:13.556241 containerd[1593]: time="2026-01-24T00:50:13.556221795Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 450.636465ms" Jan 24 00:50:13.557637 containerd[1593]: time="2026-01-24T00:50:13.557581246Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 455.842758ms" Jan 24 00:50:13.581612 containerd[1593]: time="2026-01-24T00:50:13.581560618Z" level=info msg="connecting to shim 3ea28b44c289752a77c1817f6f5a21d0ec159bd5fc33ebf35b9dab7bba1fff9c" address="unix:///run/containerd/s/9f1468fabcfa810e61541e992ef527e80a090af1278a3e500518f111123c3355" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:50:13.603099 containerd[1593]: time="2026-01-24T00:50:13.603056299Z" level=info msg="connecting to shim d97ee63acc47df44bc22dc122b6b9e8e2254e30c66832f91151f340c76d0298a" address="unix:///run/containerd/s/876ebf18e9d67aec1ee59aedb6615172fc9eae61de7aeb5cd6e5a32ce290c94d" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:50:13.613829 containerd[1593]: time="2026-01-24T00:50:13.613071794Z" level=info msg="connecting to shim 9921e5e48ff34f18896b45f4d292a2c5d35359086d62a203f11615d1614a8fb8" address="unix:///run/containerd/s/6898d12d0428813f8c84d4405c5e861b6b93b10a1fdea5553531ce8c7a532350" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:50:13.624988 systemd[1]: Started cri-containerd-3ea28b44c289752a77c1817f6f5a21d0ec159bd5fc33ebf35b9dab7bba1fff9c.scope - libcontainer container 3ea28b44c289752a77c1817f6f5a21d0ec159bd5fc33ebf35b9dab7bba1fff9c. Jan 24 00:50:13.654977 systemd[1]: Started cri-containerd-9921e5e48ff34f18896b45f4d292a2c5d35359086d62a203f11615d1614a8fb8.scope - libcontainer container 9921e5e48ff34f18896b45f4d292a2c5d35359086d62a203f11615d1614a8fb8. Jan 24 00:50:13.656500 systemd[1]: Started cri-containerd-d97ee63acc47df44bc22dc122b6b9e8e2254e30c66832f91151f340c76d0298a.scope - libcontainer container d97ee63acc47df44bc22dc122b6b9e8e2254e30c66832f91151f340c76d0298a. 
Jan 24 00:50:13.660000 audit: BPF prog-id=91 op=LOAD Jan 24 00:50:13.662000 audit: BPF prog-id=92 op=LOAD Jan 24 00:50:13.662000 audit[2493]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2475 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:13.662000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365613238623434633238393735326137376331383137663666356132 Jan 24 00:50:13.662000 audit: BPF prog-id=92 op=UNLOAD Jan 24 00:50:13.662000 audit[2493]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2475 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:13.662000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365613238623434633238393735326137376331383137663666356132 Jan 24 00:50:13.662000 audit: BPF prog-id=93 op=LOAD Jan 24 00:50:13.662000 audit[2493]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2475 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:13.662000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365613238623434633238393735326137376331383137663666356132 Jan 24 00:50:13.662000 audit: BPF prog-id=94 op=LOAD Jan 24 00:50:13.662000 audit[2493]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2475 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:13.662000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365613238623434633238393735326137376331383137663666356132 Jan 24 00:50:13.662000 audit: BPF prog-id=94 op=UNLOAD Jan 24 00:50:13.662000 audit[2493]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2475 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:13.662000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365613238623434633238393735326137376331383137663666356132 Jan 24 00:50:13.662000 audit: BPF prog-id=93 op=UNLOAD Jan 24 00:50:13.662000 audit[2493]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2475 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:13.662000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365613238623434633238393735326137376331383137663666356132 Jan 24 00:50:13.662000 audit: BPF prog-id=95 op=LOAD Jan 24 00:50:13.662000 audit[2493]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2475 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:13.662000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365613238623434633238393735326137376331383137663666356132 Jan 24 00:50:13.675000 audit: BPF prog-id=96 op=LOAD Jan 24 00:50:13.675000 audit: BPF prog-id=97 op=LOAD Jan 24 00:50:13.675000 audit[2544]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2513 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:13.675000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939323165356534386666333466313838393662343566346432393261 Jan 24 00:50:13.675000 audit: BPF prog-id=97 op=UNLOAD Jan 24 00:50:13.675000 audit[2544]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2513 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:13.675000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939323165356534386666333466313838393662343566346432393261 Jan 24 00:50:13.675000 audit: BPF prog-id=98 op=LOAD Jan 24 00:50:13.675000 audit[2544]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2513 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:13.675000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939323165356534386666333466313838393662343566346432393261 Jan 24 00:50:13.675000 audit: BPF prog-id=99 op=LOAD Jan 24 00:50:13.675000 audit[2544]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2513 pid=2544 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:13.675000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939323165356534386666333466313838393662343566346432393261 Jan 24 00:50:13.676000 audit: BPF prog-id=99 op=UNLOAD Jan 24 00:50:13.676000 audit[2544]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2513 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:13.676000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939323165356534386666333466313838393662343566346432393261 Jan 24 00:50:13.676000 audit: BPF prog-id=98 op=UNLOAD Jan 24 00:50:13.676000 audit[2544]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2513 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:13.676000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939323165356534386666333466313838393662343566346432393261 Jan 24 00:50:13.676000 audit: BPF prog-id=100 op=LOAD Jan 24 00:50:13.676000 audit[2544]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2513 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:13.676000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939323165356534386666333466313838393662343566346432393261 Jan 24 00:50:13.691000 audit: BPF prog-id=101 op=LOAD Jan 24 00:50:13.693000 audit: BPF prog-id=102 op=LOAD Jan 24 00:50:13.693000 audit[2526]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2495 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:13.693000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439376565363361636334376466343462633232646331323262366239 Jan 24 00:50:13.693000 audit: BPF prog-id=102 op=UNLOAD Jan 24 00:50:13.693000 audit[2526]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2495 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:13.693000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439376565363361636334376466343462633232646331323262366239 Jan 24 00:50:13.693000 audit: BPF prog-id=103 op=LOAD Jan 24 00:50:13.693000 audit[2526]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2495 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:13.693000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439376565363361636334376466343462633232646331323262366239 Jan 24 00:50:13.694000 audit: BPF prog-id=104 op=LOAD Jan 24 00:50:13.694000 audit[2526]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2495 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:13.694000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439376565363361636334376466343462633232646331323262366239 Jan 24 00:50:13.696000 audit: BPF prog-id=104 op=UNLOAD Jan 24 00:50:13.696000 audit[2526]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2495 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:13.696000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439376565363361636334376466343462633232646331323262366239 Jan 24 00:50:13.696000 audit: BPF prog-id=103 op=UNLOAD Jan 24 00:50:13.696000 audit[2526]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2495 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:13.696000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439376565363361636334376466343462633232646331323262366239 Jan 24 00:50:13.696000 audit: BPF prog-id=105 op=LOAD Jan 24 00:50:13.696000 audit[2526]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2495 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:13.696000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439376565363361636334376466343462633232646331323262366239 Jan 24 00:50:13.741130 containerd[1593]: time="2026-01-24T00:50:13.740962338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-239-50-209,Uid:91bd88b4cf2ab038dfeb942e84904440,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ea28b44c289752a77c1817f6f5a21d0ec159bd5fc33ebf35b9dab7bba1fff9c\"" Jan 24 00:50:13.746617 containerd[1593]: time="2026-01-24T00:50:13.746384450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-239-50-209,Uid:967450cde41cd6c69fec74b80c0fe93b,Namespace:kube-system,Attempt:0,} returns sandbox id \"9921e5e48ff34f18896b45f4d292a2c5d35359086d62a203f11615d1614a8fb8\"" Jan 24 00:50:13.746681 kubelet[2416]: E0124 00:50:13.746472 2416 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:13.747560 kubelet[2416]: E0124 00:50:13.747345 2416 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:13.752165 containerd[1593]: time="2026-01-24T00:50:13.752132723Z" level=info msg="CreateContainer within sandbox \"3ea28b44c289752a77c1817f6f5a21d0ec159bd5fc33ebf35b9dab7bba1fff9c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 24 00:50:13.753644 containerd[1593]: time="2026-01-24T00:50:13.752958224Z" level=info msg="CreateContainer within sandbox \"9921e5e48ff34f18896b45f4d292a2c5d35359086d62a203f11615d1614a8fb8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 24 00:50:13.763480 kubelet[2416]: E0124 00:50:13.763456 2416 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.239.50.209:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.239.50.209:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 24 00:50:13.765818 containerd[1593]: time="2026-01-24T00:50:13.765776700Z" level=info msg="Container d3124a1413e368e0f3ddea7cb615290c71b8536b26ebc528ea52077d50053a6b: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:50:13.766373 containerd[1593]: time="2026-01-24T00:50:13.766257390Z" level=info msg="Container 971af589157299315f6e3b61f354c1d985176d22188f648a092af7e327f32e0b: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:50:13.772886 containerd[1593]: time="2026-01-24T00:50:13.772866463Z" level=info msg="CreateContainer within sandbox \"9921e5e48ff34f18896b45f4d292a2c5d35359086d62a203f11615d1614a8fb8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d3124a1413e368e0f3ddea7cb615290c71b8536b26ebc528ea52077d50053a6b\"" Jan 24 00:50:13.774239 containerd[1593]: time="2026-01-24T00:50:13.774214074Z" level=info msg="StartContainer for \"d3124a1413e368e0f3ddea7cb615290c71b8536b26ebc528ea52077d50053a6b\"" Jan 24 00:50:13.775285 containerd[1593]: time="2026-01-24T00:50:13.775241455Z" level=info msg="CreateContainer within sandbox \"3ea28b44c289752a77c1817f6f5a21d0ec159bd5fc33ebf35b9dab7bba1fff9c\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"971af589157299315f6e3b61f354c1d985176d22188f648a092af7e327f32e0b\"" Jan 24 00:50:13.775900 containerd[1593]: time="2026-01-24T00:50:13.775863045Z" level=info msg="StartContainer for \"971af589157299315f6e3b61f354c1d985176d22188f648a092af7e327f32e0b\"" Jan 24 00:50:13.776871 containerd[1593]: time="2026-01-24T00:50:13.776836165Z" level=info msg="connecting to shim d3124a1413e368e0f3ddea7cb615290c71b8536b26ebc528ea52077d50053a6b" address="unix:///run/containerd/s/6898d12d0428813f8c84d4405c5e861b6b93b10a1fdea5553531ce8c7a532350" protocol=ttrpc version=3 Jan 24 00:50:13.777747 containerd[1593]: time="2026-01-24T00:50:13.777728776Z" level=info msg="connecting to shim 971af589157299315f6e3b61f354c1d985176d22188f648a092af7e327f32e0b" address="unix:///run/containerd/s/9f1468fabcfa810e61541e992ef527e80a090af1278a3e500518f111123c3355" protocol=ttrpc version=3 Jan 24 00:50:13.780946 containerd[1593]: time="2026-01-24T00:50:13.780916048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-239-50-209,Uid:7a423460381f1bff931bf42b7a1127fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"d97ee63acc47df44bc22dc122b6b9e8e2254e30c66832f91151f340c76d0298a\"" Jan 24 00:50:13.782058 kubelet[2416]: E0124 00:50:13.781925 2416 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:13.785973 containerd[1593]: time="2026-01-24T00:50:13.785953590Z" level=info msg="CreateContainer within sandbox \"d97ee63acc47df44bc22dc122b6b9e8e2254e30c66832f91151f340c76d0298a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 24 00:50:13.795821 containerd[1593]: time="2026-01-24T00:50:13.795761865Z" level=info msg="Container dd530dfc6ae962f9e86181cc09e71884a2c1d3004a40592f296a7c48a6c9d895: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:50:13.807547 systemd[1]: Started cri-containerd-d3124a1413e368e0f3ddea7cb615290c71b8536b26ebc528ea52077d50053a6b.scope - libcontainer container d3124a1413e368e0f3ddea7cb615290c71b8536b26ebc528ea52077d50053a6b. Jan 24 00:50:13.810614 containerd[1593]: time="2026-01-24T00:50:13.810577462Z" level=info msg="CreateContainer within sandbox \"d97ee63acc47df44bc22dc122b6b9e8e2254e30c66832f91151f340c76d0298a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dd530dfc6ae962f9e86181cc09e71884a2c1d3004a40592f296a7c48a6c9d895\"" Jan 24 00:50:13.811957 containerd[1593]: time="2026-01-24T00:50:13.811480453Z" level=info msg="StartContainer for \"dd530dfc6ae962f9e86181cc09e71884a2c1d3004a40592f296a7c48a6c9d895\"" Jan 24 00:50:13.815349 containerd[1593]: time="2026-01-24T00:50:13.815326855Z" level=info msg="connecting to shim dd530dfc6ae962f9e86181cc09e71884a2c1d3004a40592f296a7c48a6c9d895" address="unix:///run/containerd/s/876ebf18e9d67aec1ee59aedb6615172fc9eae61de7aeb5cd6e5a32ce290c94d" protocol=ttrpc version=3 Jan 24 00:50:13.817107 systemd[1]: Started cri-containerd-971af589157299315f6e3b61f354c1d985176d22188f648a092af7e327f32e0b.scope - libcontainer container 971af589157299315f6e3b61f354c1d985176d22188f648a092af7e327f32e0b. 
Jan 24 00:50:13.843986 kubelet[2416]: E0124 00:50:13.843950 2416 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.239.50.209:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.239.50.209:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 24 00:50:13.843000 audit: BPF prog-id=106 op=LOAD Jan 24 00:50:13.843000 audit: BPF prog-id=107 op=LOAD Jan 24 00:50:13.843000 audit[2605]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2513 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:13.843000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433313234613134313365333638653066336464656137636236313532 Jan 24 00:50:13.844000 audit: BPF prog-id=107 op=UNLOAD Jan 24 00:50:13.844000 audit[2605]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2513 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:13.844000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433313234613134313365333638653066336464656137636236313532 Jan 24 00:50:13.844000 audit: BPF prog-id=108 op=LOAD Jan 24 00:50:13.844000 audit[2605]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2513 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:13.844000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433313234613134313365333638653066336464656137636236313532 Jan 24 00:50:13.844000 audit: BPF prog-id=109 op=LOAD Jan 24 00:50:13.844000 audit[2605]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2513 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:13.844000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433313234613134313365333638653066336464656137636236313532 Jan 24 00:50:13.844000 audit: BPF prog-id=109 op=UNLOAD Jan 24 00:50:13.844000 audit[2605]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2513 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 
00:50:13.844000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433313234613134313365333638653066336464656137636236313532 Jan 24 00:50:13.844000 audit: BPF prog-id=108 op=UNLOAD Jan 24 00:50:13.844000 audit[2605]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2513 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:13.844000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433313234613134313365333638653066336464656137636236313532 Jan 24 00:50:13.845000 audit: BPF prog-id=110 op=LOAD Jan 24 00:50:13.845000 audit[2605]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2513 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:13.845000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433313234613134313365333638653066336464656137636236313532 Jan 24 00:50:13.851057 systemd[1]: Started cri-containerd-dd530dfc6ae962f9e86181cc09e71884a2c1d3004a40592f296a7c48a6c9d895.scope - libcontainer container dd530dfc6ae962f9e86181cc09e71884a2c1d3004a40592f296a7c48a6c9d895. 
Jan 24 00:50:13.852000 audit: BPF prog-id=111 op=LOAD Jan 24 00:50:13.853000 audit: BPF prog-id=112 op=LOAD Jan 24 00:50:13.853000 audit[2606]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000174238 a2=98 a3=0 items=0 ppid=2475 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:13.853000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937316166353839313537323939333135663665336236316633353463 Jan 24 00:50:13.853000 audit: BPF prog-id=112 op=UNLOAD Jan 24 00:50:13.853000 audit[2606]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2475 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:13.853000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937316166353839313537323939333135663665336236316633353463 Jan 24 00:50:13.853000 audit: BPF prog-id=113 op=LOAD Jan 24 00:50:13.853000 audit[2606]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000174488 a2=98 a3=0 items=0 ppid=2475 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:13.853000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937316166353839313537323939333135663665336236316633353463 Jan 24 00:50:13.854000 audit: BPF prog-id=114 op=LOAD Jan 24 00:50:13.854000 audit[2606]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000174218 a2=98 a3=0 items=0 ppid=2475 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:13.854000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937316166353839313537323939333135663665336236316633353463 Jan 24 00:50:13.854000 audit: BPF prog-id=114 op=UNLOAD Jan 24 00:50:13.854000 audit[2606]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2475 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:13.854000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937316166353839313537323939333135663665336236316633353463 Jan 24 00:50:13.854000 audit: BPF prog-id=113 op=UNLOAD Jan 24 00:50:13.854000 audit[2606]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2475 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:13.854000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937316166353839313537323939333135663665336236316633353463 Jan 24 00:50:13.854000 audit: BPF prog-id=115 op=LOAD Jan 24 00:50:13.854000 audit[2606]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001746e8 a2=98 a3=0 items=0 ppid=2475 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:13.854000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937316166353839313537323939333135663665336236316633353463 Jan 24 00:50:13.885000 audit: BPF prog-id=116 op=LOAD Jan 24 00:50:13.886000 audit: BPF prog-id=117 op=LOAD Jan 24 00:50:13.886000 audit[2643]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2495 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:13.886000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464353330646663366165393632663965383631383163633039653731 Jan 24 00:50:13.886000 audit: BPF prog-id=117 op=UNLOAD Jan 24 00:50:13.886000 audit[2643]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2495 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:13.886000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464353330646663366165393632663965383631383163633039653731 Jan 24 00:50:13.887000 audit: BPF prog-id=118 op=LOAD Jan 24 00:50:13.887000 audit[2643]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2495 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:13.887000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464353330646663366165393632663965383631383163633039653731 Jan 24 00:50:13.887000 audit: BPF prog-id=119 op=LOAD Jan 24 00:50:13.887000 audit[2643]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2495 pid=2643 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:13.887000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464353330646663366165393632663965383631383163633039653731 Jan 24 00:50:13.887000 audit: BPF prog-id=119 op=UNLOAD Jan 24 00:50:13.887000 audit[2643]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2495 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:13.887000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464353330646663366165393632663965383631383163633039653731 Jan 24 00:50:13.887000 audit: BPF prog-id=118 op=UNLOAD Jan 24 00:50:13.887000 audit[2643]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2495 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:13.887000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464353330646663366165393632663965383631383163633039653731 Jan 24 00:50:13.888000 audit: BPF prog-id=120 op=LOAD Jan 24 00:50:13.888000 audit[2643]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2495 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:13.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464353330646663366165393632663965383631383163633039653731 Jan 24 00:50:13.894700 containerd[1593]: time="2026-01-24T00:50:13.894675774Z" level=info msg="StartContainer for \"d3124a1413e368e0f3ddea7cb615290c71b8536b26ebc528ea52077d50053a6b\" returns successfully" Jan 24 00:50:13.928843 containerd[1593]: time="2026-01-24T00:50:13.927878581Z" level=info msg="StartContainer for \"971af589157299315f6e3b61f354c1d985176d22188f648a092af7e327f32e0b\" returns successfully" Jan 24 00:50:13.970027 containerd[1593]: time="2026-01-24T00:50:13.969951862Z" level=info msg="StartContainer for \"dd530dfc6ae962f9e86181cc09e71884a2c1d3004a40592f296a7c48a6c9d895\" returns successfully" Jan 24 00:50:14.217178 kubelet[2416]: I0124 00:50:14.216435 2416 kubelet_node_status.go:75] "Attempting to register node" node="172-239-50-209" Jan 24 00:50:14.686548 kubelet[2416]: E0124 00:50:14.686317 2416 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-50-209\" not found" node="172-239-50-209" Jan 24 00:50:14.687314 kubelet[2416]: E0124 
00:50:14.687128 2416 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:14.695348 kubelet[2416]: E0124 00:50:14.695329 2416 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-50-209\" not found" node="172-239-50-209" Jan 24 00:50:14.696201 kubelet[2416]: E0124 00:50:14.696155 2416 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:14.697167 kubelet[2416]: E0124 00:50:14.696980 2416 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-50-209\" not found" node="172-239-50-209" Jan 24 00:50:14.697167 kubelet[2416]: E0124 00:50:14.697111 2416 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:15.699473 kubelet[2416]: E0124 00:50:15.699424 2416 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-50-209\" not found" node="172-239-50-209" Jan 24 00:50:15.701191 kubelet[2416]: E0124 00:50:15.699440 2416 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-50-209\" not found" node="172-239-50-209" Jan 24 00:50:15.701191 kubelet[2416]: E0124 00:50:15.700233 2416 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:15.701191 kubelet[2416]: E0124 00:50:15.700909 2416 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:15.726039 kubelet[2416]: E0124 00:50:15.725978 2416 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-239-50-209\" not found" node="172-239-50-209" Jan 24 00:50:15.860979 kubelet[2416]: I0124 00:50:15.860934 2416 kubelet_node_status.go:78] "Successfully registered node" node="172-239-50-209" Jan 24 00:50:15.929687 kubelet[2416]: I0124 00:50:15.929650 2416 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-50-209" Jan 24 00:50:15.934158 kubelet[2416]: E0124 00:50:15.934135 2416 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-239-50-209\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-239-50-209" Jan 24 00:50:15.934158 kubelet[2416]: I0124 00:50:15.934154 2416 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-239-50-209" Jan 24 00:50:15.935246 kubelet[2416]: E0124 00:50:15.935209 2416 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-239-50-209\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-239-50-209" Jan 24 00:50:15.935246 kubelet[2416]: I0124 00:50:15.935226 2416 kubelet.go:3219] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-172-239-50-209" Jan 24 00:50:15.936469 kubelet[2416]: E0124 00:50:15.936423 2416 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-239-50-209\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-239-50-209" Jan 24 00:50:16.598050 kubelet[2416]: I0124 00:50:16.598015 2416 apiserver.go:52] "Watching apiserver" Jan 24 00:50:16.627917 kubelet[2416]: I0124 00:50:16.627873 2416 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 24 00:50:17.949648 systemd[1]: Reload requested from client PID 2709 ('systemctl') (unit session-8.scope)... Jan 24 00:50:17.949673 systemd[1]: Reloading... Jan 24 00:50:18.050250 kubelet[2416]: I0124 00:50:18.050074 2416 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-50-209" Jan 24 00:50:18.056827 zram_generator::config[2756]: No configuration found. Jan 24 00:50:18.062579 kubelet[2416]: E0124 00:50:18.062523 2416 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:18.334184 systemd[1]: Reloading finished in 383 ms. Jan 24 00:50:18.365851 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:50:18.384877 systemd[1]: kubelet.service: Deactivated successfully. Jan 24 00:50:18.385285 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:50:18.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:50:18.385848 kernel: kauditd_printk_skb: 214 callbacks suppressed Jan 24 00:50:18.385898 kernel: audit: type=1131 audit(1769215818.384:407): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:50:18.388852 systemd[1]: kubelet.service: Consumed 994ms CPU time, 123.5M memory peak. Jan 24 00:50:18.392600 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 24 00:50:18.393000 audit: BPF prog-id=121 op=LOAD Jan 24 00:50:18.393000 audit: BPF prog-id=71 op=UNLOAD Jan 24 00:50:18.396000 audit: BPF prog-id=122 op=LOAD Jan 24 00:50:18.396000 audit: BPF prog-id=72 op=UNLOAD Jan 24 00:50:18.396000 audit: BPF prog-id=123 op=LOAD Jan 24 00:50:18.396000 audit: BPF prog-id=124 op=LOAD Jan 24 00:50:18.396000 audit: BPF prog-id=73 op=UNLOAD Jan 24 00:50:18.396000 audit: BPF prog-id=74 op=UNLOAD Jan 24 00:50:18.397000 audit: BPF prog-id=125 op=LOAD Jan 24 00:50:18.397000 audit: BPF prog-id=126 op=LOAD Jan 24 00:50:18.397000 audit: BPF prog-id=79 op=UNLOAD Jan 24 00:50:18.398861 kernel: audit: type=1334 audit(1769215818.393:408): prog-id=121 op=LOAD Jan 24 00:50:18.398956 kernel: audit: type=1334 audit(1769215818.393:409): prog-id=71 op=UNLOAD Jan 24 00:50:18.398980 kernel: audit: type=1334 audit(1769215818.396:410): prog-id=122 op=LOAD Jan 24 00:50:18.399001 kernel: audit: type=1334 audit(1769215818.396:411): prog-id=72 op=UNLOAD Jan 24 00:50:18.399022 kernel: audit: type=1334 audit(1769215818.396:412): prog-id=123 op=LOAD Jan 24 00:50:18.399040 kernel: audit: type=1334 audit(1769215818.396:413): prog-id=124 op=LOAD Jan 24 00:50:18.399058 kernel: audit: type=1334 audit(1769215818.396:414): prog-id=73 op=UNLOAD Jan 24 00:50:18.399079 kernel: audit: type=1334 audit(1769215818.396:415): prog-id=74 op=UNLOAD Jan 24 00:50:18.399098 kernel: audit: type=1334 audit(1769215818.397:416): prog-id=125 op=LOAD Jan 24 00:50:18.397000 audit: BPF prog-id=80 op=UNLOAD Jan 24 00:50:18.398000 audit: BPF prog-id=127 op=LOAD Jan 24 00:50:18.398000 audit: BPF prog-id=87 op=UNLOAD Jan 24 00:50:18.402000 audit: BPF prog-id=128 op=LOAD Jan 24 00:50:18.402000 audit: BPF prog-id=68 op=UNLOAD Jan 24 00:50:18.402000 audit: BPF prog-id=129 op=LOAD Jan 24 00:50:18.402000 audit: BPF prog-id=130 op=LOAD Jan 24 00:50:18.402000 audit: BPF prog-id=69 op=UNLOAD Jan 24 00:50:18.402000 audit: BPF prog-id=70 op=UNLOAD Jan 24 00:50:18.402000 audit: BPF prog-id=131 op=LOAD Jan 24 00:50:18.402000 audit: BPF prog-id=88 op=UNLOAD Jan 24 00:50:18.402000 audit: BPF prog-id=132 op=LOAD Jan 24 00:50:18.402000 audit: BPF prog-id=133 op=LOAD Jan 24 00:50:18.402000 audit: BPF prog-id=89 op=UNLOAD Jan 24 00:50:18.402000 audit: BPF prog-id=90 op=UNLOAD Jan 24 00:50:18.404000 audit: BPF prog-id=134 op=LOAD Jan 24 00:50:18.404000 audit: BPF prog-id=84 op=UNLOAD Jan 24 00:50:18.404000 audit: BPF prog-id=135 op=LOAD Jan 24 00:50:18.404000 audit: BPF prog-id=136 op=LOAD Jan 24 00:50:18.404000 audit: BPF prog-id=85 op=UNLOAD Jan 24 00:50:18.406000 audit: BPF prog-id=86 op=UNLOAD Jan 24 00:50:18.407000 audit: BPF prog-id=137 op=LOAD Jan 24 00:50:18.407000 audit: BPF prog-id=78 op=UNLOAD Jan 24 00:50:18.407000 audit: BPF prog-id=138 op=LOAD Jan 24 00:50:18.407000 audit: BPF prog-id=67 op=UNLOAD Jan 24 00:50:18.410000 audit: BPF prog-id=139 op=LOAD Jan 24 00:50:18.410000 audit: BPF prog-id=81 op=UNLOAD Jan 24 00:50:18.410000 audit: BPF prog-id=140 op=LOAD Jan 24 00:50:18.410000 audit: BPF prog-id=141 op=LOAD Jan 24 00:50:18.410000 audit: BPF prog-id=82 op=UNLOAD Jan 24 00:50:18.410000 audit: BPF prog-id=83 op=UNLOAD Jan 24 00:50:18.411000 audit: BPF prog-id=142 op=LOAD Jan 24 00:50:18.411000 audit: BPF prog-id=75 op=UNLOAD Jan 24 00:50:18.411000 audit: BPF prog-id=143 op=LOAD Jan 24 00:50:18.411000 audit: BPF prog-id=144 op=LOAD Jan 24 00:50:18.411000 audit: BPF prog-id=76 op=UNLOAD Jan 24 00:50:18.411000 audit: BPF prog-id=77 op=UNLOAD Jan 24 00:50:18.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 
ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:50:18.584328 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:50:18.595253 (kubelet)[2807]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:50:18.635294 kubelet[2807]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:50:18.637825 kubelet[2807]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:50:18.637978 kubelet[2807]: I0124 00:50:18.637949 2807 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:50:18.644665 kubelet[2807]: I0124 00:50:18.644636 2807 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 24 00:50:18.644665 kubelet[2807]: I0124 00:50:18.644666 2807 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:50:18.644750 kubelet[2807]: I0124 00:50:18.644692 2807 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 24 00:50:18.644750 kubelet[2807]: I0124 00:50:18.644698 2807 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 24 00:50:18.644891 kubelet[2807]: I0124 00:50:18.644870 2807 server.go:956] "Client rotation is on, will bootstrap in background" Jan 24 00:50:18.645845 kubelet[2807]: I0124 00:50:18.645817 2807 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 24 00:50:18.647824 kubelet[2807]: I0124 00:50:18.647258 2807 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:50:18.650205 kubelet[2807]: I0124 00:50:18.650181 2807 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 24 00:50:18.653552 kubelet[2807]: I0124 00:50:18.653510 2807 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 24 00:50:18.653915 kubelet[2807]: I0124 00:50:18.653892 2807 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:50:18.654069 kubelet[2807]: I0124 00:50:18.653967 2807 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-239-50-209","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:50:18.654183 kubelet[2807]: I0124 00:50:18.654173 2807 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:50:18.654236 kubelet[2807]: I0124 00:50:18.654227 2807 container_manager_linux.go:306] "Creating device plugin manager" Jan 24 00:50:18.654293 kubelet[2807]: I0124 00:50:18.654285 2807 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 24 00:50:18.655064 kubelet[2807]: I0124 00:50:18.655052 2807 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:50:18.655293 kubelet[2807]: I0124 00:50:18.655282 2807 kubelet.go:475] "Attempting to sync node with API server" Jan 24 00:50:18.655351 kubelet[2807]: I0124 00:50:18.655342 2807 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:50:18.655407 kubelet[2807]: I0124 00:50:18.655399 2807 kubelet.go:387] "Adding apiserver pod source" Jan 24 00:50:18.655464 kubelet[2807]: I0124 00:50:18.655456 2807 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:50:18.665404 kubelet[2807]: I0124 00:50:18.665386 2807 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 24 00:50:18.668324 kubelet[2807]: I0124 00:50:18.668310 2807 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 24 00:50:18.668407 kubelet[2807]: I0124 00:50:18.668397 2807 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 24 00:50:18.672244 
kubelet[2807]: I0124 00:50:18.671112 2807 server.go:1262] "Started kubelet" Jan 24 00:50:18.672244 kubelet[2807]: I0124 00:50:18.672129 2807 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:50:18.673644 kubelet[2807]: I0124 00:50:18.672566 2807 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:50:18.674185 kubelet[2807]: I0124 00:50:18.673876 2807 server.go:310] "Adding debug handlers to kubelet server" Jan 24 00:50:18.677848 kubelet[2807]: I0124 00:50:18.677783 2807 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:50:18.677927 kubelet[2807]: I0124 00:50:18.677914 2807 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 24 00:50:18.678123 kubelet[2807]: I0124 00:50:18.678110 2807 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:50:18.678362 kubelet[2807]: I0124 00:50:18.678347 2807 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:50:18.679004 kubelet[2807]: I0124 00:50:18.678979 2807 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 24 00:50:18.681682 kubelet[2807]: I0124 00:50:18.681668 2807 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 24 00:50:18.682158 kubelet[2807]: I0124 00:50:18.682144 2807 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 24 00:50:18.682455 kubelet[2807]: I0124 00:50:18.682431 2807 reconciler.go:29] "Reconciler: start to sync state" Jan 24 00:50:18.684926 kubelet[2807]: I0124 00:50:18.684911 2807 factory.go:223] Registration of the systemd container factory successfully Jan 24 00:50:18.685879 kubelet[2807]: I0124 00:50:18.685860 2807 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:50:18.688146 kubelet[2807]: E0124 00:50:18.688099 2807 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:50:18.688314 kubelet[2807]: I0124 00:50:18.688190 2807 factory.go:223] Registration of the containerd container factory successfully Jan 24 00:50:18.696046 kubelet[2807]: I0124 00:50:18.696029 2807 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Jan 24 00:50:18.696111 kubelet[2807]: I0124 00:50:18.696102 2807 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 24 00:50:18.696173 kubelet[2807]: I0124 00:50:18.696164 2807 kubelet.go:2427] "Starting kubelet main sync loop" Jan 24 00:50:18.696278 kubelet[2807]: E0124 00:50:18.696249 2807 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:50:18.739489 kubelet[2807]: I0124 00:50:18.739460 2807 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:50:18.739489 kubelet[2807]: I0124 00:50:18.739476 2807 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:50:18.739489 kubelet[2807]: I0124 00:50:18.739493 2807 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:50:18.739656 kubelet[2807]: I0124 00:50:18.739600 2807 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 24 00:50:18.739656 kubelet[2807]: I0124 00:50:18.739608 2807 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 24 00:50:18.739656 kubelet[2807]: I0124 00:50:18.739624 2807 policy_none.go:49] "None policy: Start" Jan 24 00:50:18.739656 kubelet[2807]: I0124 00:50:18.739632 2807 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 24 00:50:18.739656 kubelet[2807]: I0124 00:50:18.739642 2807 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 24 00:50:18.739768 kubelet[2807]: I0124 00:50:18.739718 2807 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 24 00:50:18.739768 kubelet[2807]: I0124 00:50:18.739725 2807 policy_none.go:47] "Start" Jan 24 00:50:18.745443 kubelet[2807]: E0124 00:50:18.745414 2807 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 24 00:50:18.745593 kubelet[2807]: I0124 00:50:18.745568 2807 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:50:18.745628 kubelet[2807]: I0124 00:50:18.745585 2807 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:50:18.748001 kubelet[2807]: I0124 00:50:18.747407 2807 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:50:18.750498 kubelet[2807]: E0124 00:50:18.750476 2807 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 24 00:50:18.797394 kubelet[2807]: I0124 00:50:18.797352 2807 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-50-209" Jan 24 00:50:18.797692 kubelet[2807]: I0124 00:50:18.797671 2807 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-50-209" Jan 24 00:50:18.799255 kubelet[2807]: I0124 00:50:18.799239 2807 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-239-50-209" Jan 24 00:50:18.802618 kubelet[2807]: E0124 00:50:18.802563 2807 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-239-50-209\" already exists" pod="kube-system/kube-apiserver-172-239-50-209" Jan 24 00:50:18.850550 kubelet[2807]: I0124 00:50:18.850303 2807 kubelet_node_status.go:75] "Attempting to register node" node="172-239-50-209" Jan 24 00:50:18.857233 kubelet[2807]: I0124 00:50:18.857212 2807 kubelet_node_status.go:124] "Node was previously registered" node="172-239-50-209" Jan 24 00:50:18.857303 kubelet[2807]: I0124 00:50:18.857268 2807 kubelet_node_status.go:78] "Successfully registered node" node="172-239-50-209" Jan 24 00:50:18.984229 kubelet[2807]: I0124 00:50:18.984033 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7a423460381f1bff931bf42b7a1127fd-kubeconfig\") pod \"kube-scheduler-172-239-50-209\" (UID: \"7a423460381f1bff931bf42b7a1127fd\") " pod="kube-system/kube-scheduler-172-239-50-209" Jan 24 00:50:18.984229 kubelet[2807]: I0124 00:50:18.984065 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/91bd88b4cf2ab038dfeb942e84904440-ca-certs\") pod \"kube-apiserver-172-239-50-209\" (UID: \"91bd88b4cf2ab038dfeb942e84904440\") " pod="kube-system/kube-apiserver-172-239-50-209" Jan 24 00:50:18.984229 kubelet[2807]: I0124 00:50:18.984083 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/91bd88b4cf2ab038dfeb942e84904440-usr-share-ca-certificates\") pod \"kube-apiserver-172-239-50-209\" (UID: \"91bd88b4cf2ab038dfeb942e84904440\") " pod="kube-system/kube-apiserver-172-239-50-209" Jan 24 00:50:18.984229 kubelet[2807]: I0124 00:50:18.984099 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/967450cde41cd6c69fec74b80c0fe93b-kubeconfig\") pod \"kube-controller-manager-172-239-50-209\" (UID: \"967450cde41cd6c69fec74b80c0fe93b\") " pod="kube-system/kube-controller-manager-172-239-50-209" Jan 24 00:50:18.984229 kubelet[2807]: I0124 00:50:18.984113 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/967450cde41cd6c69fec74b80c0fe93b-usr-share-ca-certificates\") pod \"kube-controller-manager-172-239-50-209\" (UID: \"967450cde41cd6c69fec74b80c0fe93b\") " pod="kube-system/kube-controller-manager-172-239-50-209" Jan 24 00:50:18.984445 kubelet[2807]: I0124 00:50:18.984126 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/91bd88b4cf2ab038dfeb942e84904440-k8s-certs\") pod 
\"kube-apiserver-172-239-50-209\" (UID: \"91bd88b4cf2ab038dfeb942e84904440\") " pod="kube-system/kube-apiserver-172-239-50-209" Jan 24 00:50:18.984445 kubelet[2807]: I0124 00:50:18.984138 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/967450cde41cd6c69fec74b80c0fe93b-ca-certs\") pod \"kube-controller-manager-172-239-50-209\" (UID: \"967450cde41cd6c69fec74b80c0fe93b\") " pod="kube-system/kube-controller-manager-172-239-50-209" Jan 24 00:50:18.984445 kubelet[2807]: I0124 00:50:18.984151 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/967450cde41cd6c69fec74b80c0fe93b-flexvolume-dir\") pod \"kube-controller-manager-172-239-50-209\" (UID: \"967450cde41cd6c69fec74b80c0fe93b\") " pod="kube-system/kube-controller-manager-172-239-50-209" Jan 24 00:50:18.984445 kubelet[2807]: I0124 00:50:18.984164 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/967450cde41cd6c69fec74b80c0fe93b-k8s-certs\") pod \"kube-controller-manager-172-239-50-209\" (UID: \"967450cde41cd6c69fec74b80c0fe93b\") " pod="kube-system/kube-controller-manager-172-239-50-209" Jan 24 00:50:19.103375 kubelet[2807]: E0124 00:50:19.102588 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:19.103375 kubelet[2807]: E0124 00:50:19.102936 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:19.103493 kubelet[2807]: E0124 00:50:19.103379 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:19.657272 kubelet[2807]: I0124 00:50:19.657229 2807 apiserver.go:52] "Watching apiserver" Jan 24 00:50:19.683286 kubelet[2807]: I0124 00:50:19.683053 2807 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 24 00:50:19.711562 kubelet[2807]: E0124 00:50:19.711210 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:19.711562 kubelet[2807]: E0124 00:50:19.711515 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:19.711697 kubelet[2807]: I0124 00:50:19.711621 2807 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-50-209" Jan 24 00:50:19.717332 kubelet[2807]: E0124 00:50:19.717304 2807 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-239-50-209\" already exists" pod="kube-system/kube-apiserver-172-239-50-209" Jan 24 00:50:19.717707 kubelet[2807]: E0124 00:50:19.717684 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:19.729260 
kubelet[2807]: I0124 00:50:19.729026 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-239-50-209" podStartSLOduration=1.72901409 podStartE2EDuration="1.72901409s" podCreationTimestamp="2026-01-24 00:50:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:50:19.72900839 +0000 UTC m=+1.129720206" watchObservedRunningTime="2026-01-24 00:50:19.72901409 +0000 UTC m=+1.129725906" Jan 24 00:50:19.746313 kubelet[2807]: I0124 00:50:19.746273 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-239-50-209" podStartSLOduration=1.746247758 podStartE2EDuration="1.746247758s" podCreationTimestamp="2026-01-24 00:50:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:50:19.739903055 +0000 UTC m=+1.140614881" watchObservedRunningTime="2026-01-24 00:50:19.746247758 +0000 UTC m=+1.146959574" Jan 24 00:50:19.752977 kubelet[2807]: I0124 00:50:19.752942 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-239-50-209" podStartSLOduration=1.752932191 podStartE2EDuration="1.752932191s" podCreationTimestamp="2026-01-24 00:50:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:50:19.746742268 +0000 UTC m=+1.147454084" watchObservedRunningTime="2026-01-24 00:50:19.752932191 +0000 UTC m=+1.153644017" Jan 24 00:50:20.716885 kubelet[2807]: E0124 00:50:20.716834 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:20.717297 kubelet[2807]: E0124 00:50:20.717158 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:21.811048 kubelet[2807]: E0124 00:50:21.810991 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:22.485533 kubelet[2807]: E0124 00:50:22.485504 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:22.752547 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 24 00:50:22.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:50:22.764000 audit: BPF prog-id=131 op=UNLOAD Jan 24 00:50:23.208854 kubelet[2807]: I0124 00:50:23.208090 2807 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 24 00:50:23.210092 containerd[1593]: time="2026-01-24T00:50:23.208625338Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 24 00:50:23.210403 kubelet[2807]: I0124 00:50:23.209444 2807 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 24 00:50:23.896279 systemd[1]: Created slice kubepods-besteffort-podb06fb7fd_fc6f_45ef_b575_a0938cd2ef8b.slice - libcontainer container kubepods-besteffort-podb06fb7fd_fc6f_45ef_b575_a0938cd2ef8b.slice. Jan 24 00:50:23.915164 kubelet[2807]: I0124 00:50:23.915100 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b06fb7fd-fc6f-45ef-b575-a0938cd2ef8b-kube-proxy\") pod \"kube-proxy-j4drk\" (UID: \"b06fb7fd-fc6f-45ef-b575-a0938cd2ef8b\") " pod="kube-system/kube-proxy-j4drk" Jan 24 00:50:23.915164 kubelet[2807]: I0124 00:50:23.915154 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b06fb7fd-fc6f-45ef-b575-a0938cd2ef8b-xtables-lock\") pod \"kube-proxy-j4drk\" (UID: \"b06fb7fd-fc6f-45ef-b575-a0938cd2ef8b\") " pod="kube-system/kube-proxy-j4drk" Jan 24 00:50:23.915164 kubelet[2807]: I0124 00:50:23.915172 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b06fb7fd-fc6f-45ef-b575-a0938cd2ef8b-lib-modules\") pod \"kube-proxy-j4drk\" (UID: \"b06fb7fd-fc6f-45ef-b575-a0938cd2ef8b\") " pod="kube-system/kube-proxy-j4drk" Jan 24 00:50:23.915521 kubelet[2807]: I0124 00:50:23.915187 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7nv5\" (UniqueName: \"kubernetes.io/projected/b06fb7fd-fc6f-45ef-b575-a0938cd2ef8b-kube-api-access-x7nv5\") pod \"kube-proxy-j4drk\" (UID: \"b06fb7fd-fc6f-45ef-b575-a0938cd2ef8b\") " pod="kube-system/kube-proxy-j4drk" Jan 24 00:50:24.024676 kubelet[2807]: E0124 00:50:24.024588 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:24.207369 kubelet[2807]: E0124 00:50:24.207057 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:24.208520 containerd[1593]: time="2026-01-24T00:50:24.208457248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j4drk,Uid:b06fb7fd-fc6f-45ef-b575-a0938cd2ef8b,Namespace:kube-system,Attempt:0,}" Jan 24 00:50:24.229125 containerd[1593]: time="2026-01-24T00:50:24.229066708Z" level=info msg="connecting to shim 74a894f5bb038518b22eb8556ea586aaa999d06eb454aae18be42517eca1800e" address="unix:///run/containerd/s/ec08312822715db60e88f8e8bb24aa733127facb3d666c5152d3eacd22dd2ed9" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:50:24.260990 systemd[1]: Started cri-containerd-74a894f5bb038518b22eb8556ea586aaa999d06eb454aae18be42517eca1800e.scope - libcontainer container 74a894f5bb038518b22eb8556ea586aaa999d06eb454aae18be42517eca1800e. 
Jan 24 00:50:24.274000 audit: BPF prog-id=145 op=LOAD Jan 24 00:50:24.276546 kernel: kauditd_printk_skb: 42 callbacks suppressed Jan 24 00:50:24.276581 kernel: audit: type=1334 audit(1769215824.274:459): prog-id=145 op=LOAD Jan 24 00:50:24.276000 audit: BPF prog-id=146 op=LOAD Jan 24 00:50:24.280457 kernel: audit: type=1334 audit(1769215824.276:460): prog-id=146 op=LOAD Jan 24 00:50:24.276000 audit[2882]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2871 pid=2882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.287945 kernel: audit: type=1300 audit(1769215824.276:460): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2871 pid=2882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.289729 kernel: audit: type=1327 audit(1769215824.276:460): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734613839346635626230333835313862323265623835353665613538 Jan 24 00:50:24.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734613839346635626230333835313862323265623835353665613538 Jan 24 00:50:24.297582 kernel: audit: type=1334 audit(1769215824.276:461): prog-id=146 op=UNLOAD Jan 24 00:50:24.276000 audit: BPF prog-id=146 op=UNLOAD Jan 24 00:50:24.276000 audit[2882]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2871 pid=2882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.305836 kernel: audit: type=1300 audit(1769215824.276:461): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2871 pid=2882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.305919 kernel: audit: type=1327 audit(1769215824.276:461): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734613839346635626230333835313862323265623835353665613538 Jan 24 00:50:24.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734613839346635626230333835313862323265623835353665613538 Jan 24 00:50:24.316458 kernel: audit: type=1334 audit(1769215824.276:462): prog-id=147 op=LOAD Jan 24 00:50:24.276000 audit: BPF prog-id=147 op=LOAD Jan 24 00:50:24.276000 audit[2882]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2871 pid=2882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.326807 kernel: audit: type=1300 audit(1769215824.276:462): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2871 pid=2882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734613839346635626230333835313862323265623835353665613538 Jan 24 00:50:24.333359 kubelet[2807]: E0124 00:50:24.331247 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:24.333822 containerd[1593]: time="2026-01-24T00:50:24.329540798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j4drk,Uid:b06fb7fd-fc6f-45ef-b575-a0938cd2ef8b,Namespace:kube-system,Attempt:0,} returns sandbox id \"74a894f5bb038518b22eb8556ea586aaa999d06eb454aae18be42517eca1800e\"" Jan 24 00:50:24.334821 kernel: audit: type=1327 audit(1769215824.276:462): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734613839346635626230333835313862323265623835353665613538 Jan 24 00:50:24.343083 containerd[1593]: time="2026-01-24T00:50:24.343036635Z" level=info msg="CreateContainer within sandbox \"74a894f5bb038518b22eb8556ea586aaa999d06eb454aae18be42517eca1800e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 24 00:50:24.276000 audit: BPF prog-id=148 op=LOAD Jan 24 00:50:24.276000 audit[2882]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2871 pid=2882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734613839346635626230333835313862323265623835353665613538 Jan 24 00:50:24.276000 audit: BPF prog-id=148 op=UNLOAD Jan 24 00:50:24.276000 audit[2882]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2871 pid=2882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734613839346635626230333835313862323265623835353665613538 Jan 24 00:50:24.276000 audit: BPF prog-id=147 op=UNLOAD Jan 24 00:50:24.276000 audit[2882]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2871 pid=2882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734613839346635626230333835313862323265623835353665613538 Jan 24 00:50:24.276000 audit: BPF prog-id=149 op=LOAD Jan 24 00:50:24.276000 audit[2882]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2871 pid=2882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734613839346635626230333835313862323265623835353665613538 Jan 24 00:50:24.361188 containerd[1593]: time="2026-01-24T00:50:24.361128194Z" level=info msg="Container dac0c08afd20d635f61c74fdaeb6c043825a6904c116d03922c4b0c1332d2894: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:50:24.372950 containerd[1593]: time="2026-01-24T00:50:24.372905650Z" level=info msg="CreateContainer within sandbox \"74a894f5bb038518b22eb8556ea586aaa999d06eb454aae18be42517eca1800e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dac0c08afd20d635f61c74fdaeb6c043825a6904c116d03922c4b0c1332d2894\"" Jan 24 00:50:24.375822 containerd[1593]: time="2026-01-24T00:50:24.375747041Z" level=info msg="StartContainer for \"dac0c08afd20d635f61c74fdaeb6c043825a6904c116d03922c4b0c1332d2894\"" Jan 24 00:50:24.393991 containerd[1593]: time="2026-01-24T00:50:24.393915100Z" level=info msg="connecting to shim dac0c08afd20d635f61c74fdaeb6c043825a6904c116d03922c4b0c1332d2894" address="unix:///run/containerd/s/ec08312822715db60e88f8e8bb24aa733127facb3d666c5152d3eacd22dd2ed9" protocol=ttrpc version=3 Jan 24 00:50:24.399303 systemd[1]: Created slice kubepods-besteffort-podd1737110_99c5_4dd6_a713_6d8389373f9f.slice - libcontainer container kubepods-besteffort-podd1737110_99c5_4dd6_a713_6d8389373f9f.slice. Jan 24 00:50:24.417033 systemd[1]: Started cri-containerd-dac0c08afd20d635f61c74fdaeb6c043825a6904c116d03922c4b0c1332d2894.scope - libcontainer container dac0c08afd20d635f61c74fdaeb6c043825a6904c116d03922c4b0c1332d2894. 
Jan 24 00:50:24.425943 kubelet[2807]: I0124 00:50:24.425916 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt5dl\" (UniqueName: \"kubernetes.io/projected/d1737110-99c5-4dd6-a713-6d8389373f9f-kube-api-access-rt5dl\") pod \"tigera-operator-65cdcdfd6d-l42m5\" (UID: \"d1737110-99c5-4dd6-a713-6d8389373f9f\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-l42m5" Jan 24 00:50:24.426042 kubelet[2807]: I0124 00:50:24.425982 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d1737110-99c5-4dd6-a713-6d8389373f9f-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-l42m5\" (UID: \"d1737110-99c5-4dd6-a713-6d8389373f9f\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-l42m5" Jan 24 00:50:24.464000 audit: BPF prog-id=150 op=LOAD Jan 24 00:50:24.464000 audit[2907]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2871 pid=2907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.464000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461633063303861666432306436333566363163373466646165623663 Jan 24 00:50:24.464000 audit: BPF prog-id=151 op=LOAD Jan 24 00:50:24.464000 audit[2907]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2871 pid=2907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.464000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461633063303861666432306436333566363163373466646165623663 Jan 24 00:50:24.464000 audit: BPF prog-id=151 op=UNLOAD Jan 24 00:50:24.464000 audit[2907]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2871 pid=2907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.464000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461633063303861666432306436333566363163373466646165623663 Jan 24 00:50:24.464000 audit: BPF prog-id=150 op=UNLOAD Jan 24 00:50:24.464000 audit[2907]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2871 pid=2907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.464000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461633063303861666432306436333566363163373466646165623663 Jan 24 00:50:24.464000 audit: BPF prog-id=152 op=LOAD Jan 24 00:50:24.464000 audit[2907]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2871 pid=2907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.464000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461633063303861666432306436333566363163373466646165623663 Jan 24 00:50:24.486159 containerd[1593]: time="2026-01-24T00:50:24.486065316Z" level=info msg="StartContainer for \"dac0c08afd20d635f61c74fdaeb6c043825a6904c116d03922c4b0c1332d2894\" returns successfully" Jan 24 00:50:24.705000 audit[2973]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=2973 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:50:24.705000 audit[2973]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc19c3db90 a2=0 a3=7ffc19c3db7c items=0 ppid=2920 pid=2973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.705000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 24 00:50:24.706914 containerd[1593]: time="2026-01-24T00:50:24.705895386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-l42m5,Uid:d1737110-99c5-4dd6-a713-6d8389373f9f,Namespace:tigera-operator,Attempt:0,}" Jan 24 00:50:24.707000 audit[2975]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=2975 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:50:24.707000 audit[2975]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd331b9c10 a2=0 a3=7ffd331b9bfc items=0 ppid=2920 pid=2975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.707000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 24 00:50:24.708000 audit[2976]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_chain pid=2976 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:50:24.708000 audit[2976]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffef6c70420 a2=0 a3=7ffef6c7040c items=0 ppid=2920 pid=2976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.708000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 24 00:50:24.710000 audit[2974]: NETFILTER_CFG table=mangle:57 family=10 entries=1 op=nft_register_chain pid=2974 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 
00:50:24.710000 audit[2974]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc2ccdc630 a2=0 a3=7ffc2ccdc61c items=0 ppid=2920 pid=2974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.710000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 24 00:50:24.717000 audit[2979]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=2979 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:50:24.717000 audit[2979]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd9c9e2650 a2=0 a3=7ffd9c9e263c items=0 ppid=2920 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.717000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 24 00:50:24.721000 audit[2981]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=2981 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:50:24.727448 kubelet[2807]: E0124 00:50:24.727414 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:24.721000 audit[2981]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd30395dc0 a2=0 a3=7ffd30395dac items=0 ppid=2920 pid=2981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.729953 kubelet[2807]: E0124 00:50:24.729904 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:24.721000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 24 00:50:24.739830 containerd[1593]: time="2026-01-24T00:50:24.739747233Z" level=info msg="connecting to shim e0aea1963dc922ee2bbb48efb2f98126e2a6354fb965e12a3dcaa12e8308105f" address="unix:///run/containerd/s/9b0c27663dfdac8f846a8f8731f6e4eb70a8193bb595a398d34b605444d2a5a6" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:50:24.754924 kubelet[2807]: I0124 00:50:24.754488 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-j4drk" podStartSLOduration=1.7544720200000001 podStartE2EDuration="1.75447202s" podCreationTimestamp="2026-01-24 00:50:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:50:24.741460694 +0000 UTC m=+6.142172510" watchObservedRunningTime="2026-01-24 00:50:24.75447202 +0000 UTC m=+6.155183836" Jan 24 00:50:24.776947 systemd[1]: Started cri-containerd-e0aea1963dc922ee2bbb48efb2f98126e2a6354fb965e12a3dcaa12e8308105f.scope - libcontainer container e0aea1963dc922ee2bbb48efb2f98126e2a6354fb965e12a3dcaa12e8308105f. 
Jan 24 00:50:24.791000 audit: BPF prog-id=153 op=LOAD Jan 24 00:50:24.791000 audit: BPF prog-id=154 op=LOAD Jan 24 00:50:24.791000 audit[3004]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2992 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.791000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530616561313936336463393232656532626262343865666232663938 Jan 24 00:50:24.791000 audit: BPF prog-id=154 op=UNLOAD Jan 24 00:50:24.791000 audit[3004]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2992 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.791000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530616561313936336463393232656532626262343865666232663938 Jan 24 00:50:24.791000 audit: BPF prog-id=155 op=LOAD Jan 24 00:50:24.791000 audit[3004]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2992 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.791000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530616561313936336463393232656532626262343865666232663938 Jan 24 00:50:24.791000 audit: BPF prog-id=156 op=LOAD Jan 24 00:50:24.791000 audit[3004]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2992 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.791000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530616561313936336463393232656532626262343865666232663938 Jan 24 00:50:24.791000 audit: BPF prog-id=156 op=UNLOAD Jan 24 00:50:24.791000 audit[3004]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2992 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.791000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530616561313936336463393232656532626262343865666232663938 Jan 24 00:50:24.792000 audit: BPF prog-id=155 op=UNLOAD Jan 24 00:50:24.792000 audit[3004]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2992 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.792000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530616561313936336463393232656532626262343865666232663938 Jan 24 00:50:24.792000 audit: BPF prog-id=157 op=LOAD Jan 24 00:50:24.792000 audit[3004]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2992 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.792000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530616561313936336463393232656532626262343865666232663938 Jan 24 00:50:24.817000 audit[3022]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3022 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:50:24.817000 audit[3022]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffecde64500 a2=0 a3=7ffecde644ec items=0 ppid=2920 pid=3022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.817000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 24 00:50:24.826000 audit[3025]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3025 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:50:24.826000 audit[3025]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc95251930 a2=0 a3=7ffc9525191c items=0 ppid=2920 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.826000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73002D Jan 24 00:50:24.836000 audit[3032]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3032 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:50:24.836000 audit[3032]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe5293c280 a2=0 a3=7ffe5293c26c items=0 ppid=2920 pid=3032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.836000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 24 00:50:24.837955 containerd[1593]: time="2026-01-24T00:50:24.837896012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-l42m5,Uid:d1737110-99c5-4dd6-a713-6d8389373f9f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e0aea1963dc922ee2bbb48efb2f98126e2a6354fb965e12a3dcaa12e8308105f\"" Jan 24 00:50:24.840481 containerd[1593]: time="2026-01-24T00:50:24.840236793Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 24 00:50:24.840000 audit[3033]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3033 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:50:24.840000 audit[3033]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcfc533c70 a2=0 a3=7ffcfc533c5c items=0 ppid=2920 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.840000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 24 00:50:24.843000 audit[3035]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3035 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:50:24.843000 audit[3035]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcacf7a860 a2=0 a3=7ffcacf7a84c items=0 ppid=2920 pid=3035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.843000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 24 00:50:24.845000 audit[3036]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3036 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:50:24.845000 audit[3036]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd638d3180 a2=0 a3=7ffd638d316c items=0 ppid=2920 pid=3036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.845000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 24 00:50:24.848000 audit[3038]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3038 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:50:24.848000 audit[3038]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff409dda70 a2=0 a3=7fff409dda5c items=0 ppid=2920 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.848000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 00:50:24.854000 audit[3041]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3041 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:50:24.854000 audit[3041]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd89770250 a2=0 a3=7ffd8977023c items=0 ppid=2920 pid=3041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.854000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 00:50:24.855000 audit[3042]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3042 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:50:24.855000 audit[3042]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd6e100600 a2=0 a3=7ffd6e1005ec items=0 ppid=2920 pid=3042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.855000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 24 00:50:24.859000 audit[3044]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3044 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:50:24.859000 audit[3044]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe0de4a9d0 a2=0 a3=7ffe0de4a9bc items=0 ppid=2920 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.859000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 24 00:50:24.860000 audit[3045]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3045 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:50:24.860000 audit[3045]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe6c928c70 a2=0 a3=7ffe6c928c5c items=0 ppid=2920 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.860000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 24 00:50:24.864000 audit[3047]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:50:24.864000 audit[3047]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff53864790 a2=0 a3=7fff5386477c items=0 ppid=2920 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.864000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F5859 Jan 24 00:50:24.870000 audit[3050]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3050 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:50:24.870000 audit[3050]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdaf8d6800 a2=0 a3=7ffdaf8d67ec items=0 ppid=2920 pid=3050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.870000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 24 00:50:24.875000 audit[3053]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3053 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:50:24.875000 audit[3053]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffc29ca4b0 a2=0 a3=7fffc29ca49c items=0 ppid=2920 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.875000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 24 00:50:24.877000 audit[3054]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3054 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:50:24.877000 audit[3054]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc1d54c3e0 a2=0 a3=7ffc1d54c3cc items=0 ppid=2920 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.877000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 24 00:50:24.880000 audit[3056]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3056 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:50:24.880000 audit[3056]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffdb27fc7a0 a2=0 a3=7ffdb27fc78c items=0 ppid=2920 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.880000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 00:50:24.884000 audit[3059]: NETFILTER_CFG table=nat:76 
family=2 entries=1 op=nft_register_rule pid=3059 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:50:24.884000 audit[3059]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd9e21e0b0 a2=0 a3=7ffd9e21e09c items=0 ppid=2920 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.884000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 00:50:24.886000 audit[3060]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3060 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:50:24.886000 audit[3060]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe98cd18b0 a2=0 a3=7ffe98cd189c items=0 ppid=2920 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.886000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 24 00:50:24.889000 audit[3062]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3062 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:50:24.889000 audit[3062]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffea3f70560 a2=0 a3=7ffea3f7054c items=0 ppid=2920 pid=3062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.889000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 24 00:50:24.911000 audit[3068]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3068 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:50:24.911000 audit[3068]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd6032a7a0 a2=0 a3=7ffd6032a78c items=0 ppid=2920 pid=3068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.911000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:50:24.920000 audit[3068]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3068 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:50:24.920000 audit[3068]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd6032a7a0 a2=0 a3=7ffd6032a78c items=0 ppid=2920 pid=3068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.920000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:50:24.922000 audit[3073]: NETFILTER_CFG table=filter:81 family=10 entries=1 
op=nft_register_chain pid=3073 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:50:24.922000 audit[3073]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc04fafc20 a2=0 a3=7ffc04fafc0c items=0 ppid=2920 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.922000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 24 00:50:24.925000 audit[3075]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3075 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:50:24.925000 audit[3075]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe2fb53f30 a2=0 a3=7ffe2fb53f1c items=0 ppid=2920 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.925000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 24 00:50:24.930000 audit[3078]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3078 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:50:24.930000 audit[3078]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff8aaee240 a2=0 a3=7fff8aaee22c items=0 ppid=2920 pid=3078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.930000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C Jan 24 00:50:24.932000 audit[3079]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3079 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:50:24.932000 audit[3079]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdbb65eac0 a2=0 a3=7ffdbb65eaac items=0 ppid=2920 pid=3079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.932000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 24 00:50:24.935000 audit[3081]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3081 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:50:24.935000 audit[3081]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff920de6b0 a2=0 a3=7fff920de69c items=0 ppid=2920 pid=3081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.935000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 24 00:50:24.936000 audit[3082]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3082 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:50:24.936000 audit[3082]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffca73ebc20 a2=0 a3=7ffca73ebc0c items=0 ppid=2920 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.936000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 24 00:50:24.939000 audit[3084]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3084 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:50:24.939000 audit[3084]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff8d810d40 a2=0 a3=7fff8d810d2c items=0 ppid=2920 pid=3084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.939000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 00:50:24.944000 audit[3087]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3087 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:50:24.944000 audit[3087]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffef5bf10b0 a2=0 a3=7ffef5bf109c items=0 ppid=2920 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.944000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 00:50:24.945000 audit[3088]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3088 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:50:24.945000 audit[3088]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc66e14b90 a2=0 a3=7ffc66e14b7c items=0 ppid=2920 pid=3088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.945000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 24 00:50:24.948000 audit[3090]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3090 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:50:24.948000 audit[3090]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdf63fa310 a2=0 a3=7ffdf63fa2fc items=0 ppid=2920 pid=3090 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.948000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 24 00:50:24.950000 audit[3091]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3091 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:50:24.950000 audit[3091]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe6310d9e0 a2=0 a3=7ffe6310d9cc items=0 ppid=2920 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.950000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 24 00:50:24.953000 audit[3093]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3093 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:50:24.953000 audit[3093]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdbf562410 a2=0 a3=7ffdbf5623fc items=0 ppid=2920 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.953000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 24 00:50:24.959000 audit[3096]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3096 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:50:24.959000 audit[3096]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcae7f3430 a2=0 a3=7ffcae7f341c items=0 ppid=2920 pid=3096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.959000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 24 00:50:24.964000 audit[3099]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3099 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:50:24.964000 audit[3099]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe108affa0 a2=0 a3=7ffe108aff8c items=0 ppid=2920 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.964000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D5052 Jan 24 00:50:24.965000 audit[3100]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3100 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:50:24.965000 audit[3100]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd670f8db0 a2=0 a3=7ffd670f8d9c items=0 ppid=2920 pid=3100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.965000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 24 00:50:24.968000 audit[3102]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3102 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:50:24.968000 audit[3102]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe70fd3800 a2=0 a3=7ffe70fd37ec items=0 ppid=2920 pid=3102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.968000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 00:50:24.976000 audit[3105]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3105 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:50:24.976000 audit[3105]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffffdca2cf0 a2=0 a3=7ffffdca2cdc items=0 ppid=2920 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.976000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 00:50:24.977000 audit[3106]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3106 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:50:24.977000 audit[3106]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff7e0f1f90 a2=0 a3=7fff7e0f1f7c items=0 ppid=2920 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.977000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 24 00:50:24.980000 audit[3108]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3108 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:50:24.980000 audit[3108]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffd8f6896a0 a2=0 a3=7ffd8f68968c items=0 ppid=2920 pid=3108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.980000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 24 00:50:24.982000 audit[3109]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3109 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:50:24.982000 audit[3109]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffecf11c960 a2=0 a3=7ffecf11c94c items=0 ppid=2920 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.982000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 24 00:50:24.985000 audit[3111]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3111 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:50:24.985000 audit[3111]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc06fef700 a2=0 a3=7ffc06fef6ec items=0 ppid=2920 pid=3111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.985000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 24 00:50:24.990000 audit[3114]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3114 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:50:24.990000 audit[3114]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff93d390b0 a2=0 a3=7fff93d3909c items=0 ppid=2920 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.990000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 24 00:50:24.994000 audit[3116]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3116 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 24 00:50:24.994000 audit[3116]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffea84e2a90 a2=0 a3=7ffea84e2a7c items=0 ppid=2920 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:24.994000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:50:24.995000 audit[3116]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3116 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 24 00:50:24.995000 audit[3116]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffea84e2a90 a2=0 a3=7ffea84e2a7c items=0 ppid=2920 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 24 00:50:24.995000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:50:25.824348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1109286236.mount: Deactivated successfully. Jan 24 00:50:26.437202 containerd[1593]: time="2026-01-24T00:50:26.437135668Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:26.437993 containerd[1593]: time="2026-01-24T00:50:26.437820142Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 24 00:50:26.438505 containerd[1593]: time="2026-01-24T00:50:26.438479254Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:26.442145 containerd[1593]: time="2026-01-24T00:50:26.442114507Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:26.442774 containerd[1593]: time="2026-01-24T00:50:26.442742389Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.602478986s" Jan 24 00:50:26.442886 containerd[1593]: time="2026-01-24T00:50:26.442767419Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 24 00:50:26.447091 containerd[1593]: time="2026-01-24T00:50:26.447039445Z" level=info msg="CreateContainer within sandbox \"e0aea1963dc922ee2bbb48efb2f98126e2a6354fb965e12a3dcaa12e8308105f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 24 00:50:26.454433 containerd[1593]: time="2026-01-24T00:50:26.453344198Z" level=info msg="Container e962764d5e8056cf6fcf9f978f31b90c59f1b6453be2d73eb7187799cb214e2d: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:50:26.457530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2848400444.mount: Deactivated successfully. Jan 24 00:50:26.466712 containerd[1593]: time="2026-01-24T00:50:26.466689288Z" level=info msg="CreateContainer within sandbox \"e0aea1963dc922ee2bbb48efb2f98126e2a6354fb965e12a3dcaa12e8308105f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e962764d5e8056cf6fcf9f978f31b90c59f1b6453be2d73eb7187799cb214e2d\"" Jan 24 00:50:26.467581 containerd[1593]: time="2026-01-24T00:50:26.467144440Z" level=info msg="StartContainer for \"e962764d5e8056cf6fcf9f978f31b90c59f1b6453be2d73eb7187799cb214e2d\"" Jan 24 00:50:26.467926 containerd[1593]: time="2026-01-24T00:50:26.467872253Z" level=info msg="connecting to shim e962764d5e8056cf6fcf9f978f31b90c59f1b6453be2d73eb7187799cb214e2d" address="unix:///run/containerd/s/9b0c27663dfdac8f846a8f8731f6e4eb70a8193bb595a398d34b605444d2a5a6" protocol=ttrpc version=3 Jan 24 00:50:26.494961 systemd[1]: Started cri-containerd-e962764d5e8056cf6fcf9f978f31b90c59f1b6453be2d73eb7187799cb214e2d.scope - libcontainer container e962764d5e8056cf6fcf9f978f31b90c59f1b6453be2d73eb7187799cb214e2d. 
Jan 24 00:50:26.506000 audit: BPF prog-id=158 op=LOAD Jan 24 00:50:26.506000 audit: BPF prog-id=159 op=LOAD Jan 24 00:50:26.506000 audit[3125]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2992 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:26.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539363237363464356538303536636636666366396639373866333162 Jan 24 00:50:26.506000 audit: BPF prog-id=159 op=UNLOAD Jan 24 00:50:26.506000 audit[3125]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2992 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:26.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539363237363464356538303536636636666366396639373866333162 Jan 24 00:50:26.506000 audit: BPF prog-id=160 op=LOAD Jan 24 00:50:26.506000 audit[3125]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2992 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:26.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539363237363464356538303536636636666366396639373866333162 Jan 24 00:50:26.506000 audit: BPF prog-id=161 op=LOAD Jan 24 00:50:26.506000 audit[3125]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2992 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:26.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539363237363464356538303536636636666366396639373866333162 Jan 24 00:50:26.506000 audit: BPF prog-id=161 op=UNLOAD Jan 24 00:50:26.506000 audit[3125]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2992 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:26.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539363237363464356538303536636636666366396639373866333162 Jan 24 00:50:26.506000 audit: BPF prog-id=160 op=UNLOAD Jan 24 00:50:26.506000 audit[3125]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2992 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:26.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539363237363464356538303536636636666366396639373866333162 Jan 24 00:50:26.507000 audit: BPF prog-id=162 op=LOAD Jan 24 00:50:26.507000 audit[3125]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2992 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:26.507000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539363237363464356538303536636636666366396639373866333162 Jan 24 00:50:26.526658 containerd[1593]: time="2026-01-24T00:50:26.526625840Z" level=info msg="StartContainer for \"e962764d5e8056cf6fcf9f978f31b90c59f1b6453be2d73eb7187799cb214e2d\" returns successfully" Jan 24 00:50:26.742031 kubelet[2807]: I0124 00:50:26.741724 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-l42m5" podStartSLOduration=1.137420814 podStartE2EDuration="2.741708566s" podCreationTimestamp="2026-01-24 00:50:24 +0000 UTC" firstStartedPulling="2026-01-24 00:50:24.839973133 +0000 UTC m=+6.240684959" lastFinishedPulling="2026-01-24 00:50:26.444260885 +0000 UTC m=+7.844972711" observedRunningTime="2026-01-24 00:50:26.741541675 +0000 UTC m=+8.142253491" watchObservedRunningTime="2026-01-24 00:50:26.741708566 +0000 UTC m=+8.142420382" Jan 24 00:50:31.823682 kubelet[2807]: E0124 00:50:31.823638 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:31.877615 sudo[1856]: pam_unix(sudo:session): session closed for user root Jan 24 00:50:31.886116 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 24 00:50:31.886167 kernel: audit: type=1106 audit(1769215831.876:539): pid=1856 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:50:31.876000 audit[1856]: USER_END pid=1856 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:50:31.892599 kernel: audit: type=1104 audit(1769215831.876:540): pid=1856 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:50:31.876000 audit[1856]: CRED_DISP pid=1856 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 24 00:50:31.902864 sshd[1855]: Connection closed by 68.220.241.50 port 54074 Jan 24 00:50:31.904575 sshd-session[1851]: pam_unix(sshd:session): session closed for user core Jan 24 00:50:31.905000 audit[1851]: USER_END pid=1851 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:50:31.913742 systemd[1]: sshd@6-172.239.50.209:22-68.220.241.50:54074.service: Deactivated successfully. Jan 24 00:50:31.916313 kernel: audit: type=1106 audit(1769215831.905:541): pid=1851 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:50:31.905000 audit[1851]: CRED_DISP pid=1851 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:50:31.919838 systemd[1]: session-8.scope: Deactivated successfully. Jan 24 00:50:31.920142 systemd[1]: session-8.scope: Consumed 4.532s CPU time, 229M memory peak. Jan 24 00:50:31.924609 systemd-logind[1569]: Session 8 logged out. Waiting for processes to exit. Jan 24 00:50:31.924998 kernel: audit: type=1104 audit(1769215831.905:542): pid=1851 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:50:31.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.239.50.209:22-68.220.241.50:54074 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:50:31.927834 systemd-logind[1569]: Removed session 8. Jan 24 00:50:31.931824 kernel: audit: type=1131 audit(1769215831.913:543): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.239.50.209:22-68.220.241.50:54074 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:50:32.505160 kubelet[2807]: E0124 00:50:32.505125 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:32.641000 audit[3205]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3205 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:50:32.646868 kernel: audit: type=1325 audit(1769215832.641:544): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3205 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:50:32.641000 audit[3205]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffff304800 a2=0 a3=7fffff3047ec items=0 ppid=2920 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:32.658815 kernel: audit: type=1300 audit(1769215832.641:544): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffff304800 a2=0 a3=7fffff3047ec items=0 ppid=2920 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:32.641000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:50:32.665840 kernel: audit: type=1327 audit(1769215832.641:544): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:50:32.647000 audit[3205]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3205 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:50:32.672809 kernel: audit: type=1325 audit(1769215832.647:545): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3205 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:50:32.647000 audit[3205]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffff304800 a2=0 a3=0 items=0 ppid=2920 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:32.684809 kernel: audit: type=1300 audit(1769215832.647:545): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffff304800 a2=0 a3=0 items=0 ppid=2920 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:32.647000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:50:32.747111 kubelet[2807]: E0124 00:50:32.747079 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:33.674000 audit[3207]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3207 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:50:33.674000 audit[3207]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe81d8e3e0 a2=0 a3=7ffe81d8e3cc items=0 ppid=2920 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:33.674000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:50:33.687000 audit[3207]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3207 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:50:33.687000 audit[3207]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe81d8e3e0 a2=0 a3=0 items=0 ppid=2920 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:33.687000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:50:34.803000 audit[3209]: NETFILTER_CFG table=filter:109 family=2 entries=18 op=nft_register_rule pid=3209 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:50:34.803000 audit[3209]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcdac48730 a2=0 a3=7ffcdac4871c items=0 ppid=2920 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:34.803000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:50:34.806000 audit[3209]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3209 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:50:34.806000 audit[3209]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcdac48730 a2=0 a3=0 items=0 ppid=2920 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:34.806000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:50:36.302000 audit[3211]: NETFILTER_CFG table=filter:111 family=2 entries=21 op=nft_register_rule pid=3211 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:50:36.302000 audit[3211]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc14b2e8f0 a2=0 a3=7ffc14b2e8dc items=0 ppid=2920 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:36.302000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:50:36.322473 systemd[1]: Created slice kubepods-besteffort-pod5b71e833_2f0f_4e21_8fb9_d106454e0413.slice - libcontainer container kubepods-besteffort-pod5b71e833_2f0f_4e21_8fb9_d106454e0413.slice. 
Jan 24 00:50:36.332000 audit[3211]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3211 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:50:36.332000 audit[3211]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc14b2e8f0 a2=0 a3=0 items=0 ppid=2920 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:36.332000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:50:36.405920 kubelet[2807]: I0124 00:50:36.405869 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtspr\" (UniqueName: \"kubernetes.io/projected/5b71e833-2f0f-4e21-8fb9-d106454e0413-kube-api-access-mtspr\") pod \"calico-typha-64cb8b8766-jdzjs\" (UID: \"5b71e833-2f0f-4e21-8fb9-d106454e0413\") " pod="calico-system/calico-typha-64cb8b8766-jdzjs" Jan 24 00:50:36.406337 kubelet[2807]: I0124 00:50:36.405960 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b71e833-2f0f-4e21-8fb9-d106454e0413-tigera-ca-bundle\") pod \"calico-typha-64cb8b8766-jdzjs\" (UID: \"5b71e833-2f0f-4e21-8fb9-d106454e0413\") " pod="calico-system/calico-typha-64cb8b8766-jdzjs" Jan 24 00:50:36.406337 kubelet[2807]: I0124 00:50:36.405988 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5b71e833-2f0f-4e21-8fb9-d106454e0413-typha-certs\") pod \"calico-typha-64cb8b8766-jdzjs\" (UID: \"5b71e833-2f0f-4e21-8fb9-d106454e0413\") " pod="calico-system/calico-typha-64cb8b8766-jdzjs" Jan 24 00:50:36.531906 systemd[1]: Created slice kubepods-besteffort-podf1bafa18_b901_44af_bf4b_c41ce0187660.slice - libcontainer container kubepods-besteffort-podf1bafa18_b901_44af_bf4b_c41ce0187660.slice. 
Jan 24 00:50:36.607285 kubelet[2807]: I0124 00:50:36.607081 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f1bafa18-b901-44af-bf4b-c41ce0187660-cni-bin-dir\") pod \"calico-node-rvvw8\" (UID: \"f1bafa18-b901-44af-bf4b-c41ce0187660\") " pod="calico-system/calico-node-rvvw8" Jan 24 00:50:36.607285 kubelet[2807]: I0124 00:50:36.607122 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f1bafa18-b901-44af-bf4b-c41ce0187660-flexvol-driver-host\") pod \"calico-node-rvvw8\" (UID: \"f1bafa18-b901-44af-bf4b-c41ce0187660\") " pod="calico-system/calico-node-rvvw8" Jan 24 00:50:36.607285 kubelet[2807]: I0124 00:50:36.607147 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f1bafa18-b901-44af-bf4b-c41ce0187660-var-lib-calico\") pod \"calico-node-rvvw8\" (UID: \"f1bafa18-b901-44af-bf4b-c41ce0187660\") " pod="calico-system/calico-node-rvvw8" Jan 24 00:50:36.607285 kubelet[2807]: I0124 00:50:36.607170 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f1bafa18-b901-44af-bf4b-c41ce0187660-cni-net-dir\") pod \"calico-node-rvvw8\" (UID: \"f1bafa18-b901-44af-bf4b-c41ce0187660\") " pod="calico-system/calico-node-rvvw8" Jan 24 00:50:36.607285 kubelet[2807]: I0124 00:50:36.607186 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6595\" (UniqueName: \"kubernetes.io/projected/f1bafa18-b901-44af-bf4b-c41ce0187660-kube-api-access-s6595\") pod \"calico-node-rvvw8\" (UID: \"f1bafa18-b901-44af-bf4b-c41ce0187660\") " pod="calico-system/calico-node-rvvw8" Jan 24 00:50:36.607554 kubelet[2807]: I0124 00:50:36.607202 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f1bafa18-b901-44af-bf4b-c41ce0187660-policysync\") pod \"calico-node-rvvw8\" (UID: \"f1bafa18-b901-44af-bf4b-c41ce0187660\") " pod="calico-system/calico-node-rvvw8" Jan 24 00:50:36.607554 kubelet[2807]: I0124 00:50:36.607216 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1bafa18-b901-44af-bf4b-c41ce0187660-xtables-lock\") pod \"calico-node-rvvw8\" (UID: \"f1bafa18-b901-44af-bf4b-c41ce0187660\") " pod="calico-system/calico-node-rvvw8" Jan 24 00:50:36.607554 kubelet[2807]: I0124 00:50:36.607233 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f1bafa18-b901-44af-bf4b-c41ce0187660-cni-log-dir\") pod \"calico-node-rvvw8\" (UID: \"f1bafa18-b901-44af-bf4b-c41ce0187660\") " pod="calico-system/calico-node-rvvw8" Jan 24 00:50:36.607554 kubelet[2807]: I0124 00:50:36.607246 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f1bafa18-b901-44af-bf4b-c41ce0187660-node-certs\") pod \"calico-node-rvvw8\" (UID: \"f1bafa18-b901-44af-bf4b-c41ce0187660\") " pod="calico-system/calico-node-rvvw8" Jan 24 00:50:36.607554 kubelet[2807]: I0124 00:50:36.607296 2807 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1bafa18-b901-44af-bf4b-c41ce0187660-tigera-ca-bundle\") pod \"calico-node-rvvw8\" (UID: \"f1bafa18-b901-44af-bf4b-c41ce0187660\") " pod="calico-system/calico-node-rvvw8" Jan 24 00:50:36.607662 kubelet[2807]: I0124 00:50:36.607339 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f1bafa18-b901-44af-bf4b-c41ce0187660-var-run-calico\") pod \"calico-node-rvvw8\" (UID: \"f1bafa18-b901-44af-bf4b-c41ce0187660\") " pod="calico-system/calico-node-rvvw8" Jan 24 00:50:36.607662 kubelet[2807]: I0124 00:50:36.607373 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1bafa18-b901-44af-bf4b-c41ce0187660-lib-modules\") pod \"calico-node-rvvw8\" (UID: \"f1bafa18-b901-44af-bf4b-c41ce0187660\") " pod="calico-system/calico-node-rvvw8" Jan 24 00:50:36.633312 kubelet[2807]: E0124 00:50:36.633230 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:36.634190 containerd[1593]: time="2026-01-24T00:50:36.634003334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64cb8b8766-jdzjs,Uid:5b71e833-2f0f-4e21-8fb9-d106454e0413,Namespace:calico-system,Attempt:0,}" Jan 24 00:50:36.654747 containerd[1593]: time="2026-01-24T00:50:36.654654239Z" level=info msg="connecting to shim 5d1ba4233743152e00f96650ae80dbe024213fe94f0d4a7a7048dc7a4870a499" address="unix:///run/containerd/s/70c5e92ed900b383e590bf25c3ac95634d557f38793c3db9c06fa542f6950504" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:50:36.693008 systemd[1]: Started cri-containerd-5d1ba4233743152e00f96650ae80dbe024213fe94f0d4a7a7048dc7a4870a499.scope - libcontainer container 5d1ba4233743152e00f96650ae80dbe024213fe94f0d4a7a7048dc7a4870a499. Jan 24 00:50:36.704482 kubelet[2807]: E0124 00:50:36.704440 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbqvx" podUID="f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c" Jan 24 00:50:36.711933 kubelet[2807]: E0124 00:50:36.710553 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.711933 kubelet[2807]: W0124 00:50:36.710569 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.711933 kubelet[2807]: E0124 00:50:36.710604 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:50:36.722926 kubelet[2807]: E0124 00:50:36.722899 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.723155 kubelet[2807]: W0124 00:50:36.723099 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.723155 kubelet[2807]: E0124 00:50:36.723124 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.740505 kubelet[2807]: E0124 00:50:36.740437 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.740505 kubelet[2807]: W0124 00:50:36.740456 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.740505 kubelet[2807]: E0124 00:50:36.740472 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.782046 kubelet[2807]: E0124 00:50:36.782021 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.782328 kubelet[2807]: W0124 00:50:36.782199 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.782328 kubelet[2807]: E0124 00:50:36.782223 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.782599 kubelet[2807]: E0124 00:50:36.782505 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.782599 kubelet[2807]: W0124 00:50:36.782515 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.782599 kubelet[2807]: E0124 00:50:36.782524 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.782773 kubelet[2807]: E0124 00:50:36.782763 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.782922 kubelet[2807]: W0124 00:50:36.782865 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.782922 kubelet[2807]: E0124 00:50:36.782879 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:50:36.784043 kubelet[2807]: E0124 00:50:36.783874 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.784043 kubelet[2807]: W0124 00:50:36.783885 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.784043 kubelet[2807]: E0124 00:50:36.783895 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.785109 kubelet[2807]: E0124 00:50:36.785063 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.785109 kubelet[2807]: W0124 00:50:36.785105 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.785177 kubelet[2807]: E0124 00:50:36.785124 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.785398 kubelet[2807]: E0124 00:50:36.785378 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.785398 kubelet[2807]: W0124 00:50:36.785392 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.785637 kubelet[2807]: E0124 00:50:36.785404 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.785925 kubelet[2807]: E0124 00:50:36.785905 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.785925 kubelet[2807]: W0124 00:50:36.785920 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.786854 kubelet[2807]: E0124 00:50:36.785929 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.787089 kubelet[2807]: E0124 00:50:36.787071 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.787089 kubelet[2807]: W0124 00:50:36.787085 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.787145 kubelet[2807]: E0124 00:50:36.787106 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:50:36.787358 kubelet[2807]: E0124 00:50:36.787338 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.787386 kubelet[2807]: W0124 00:50:36.787353 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.787386 kubelet[2807]: E0124 00:50:36.787381 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.788964 kubelet[2807]: E0124 00:50:36.788943 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.788964 kubelet[2807]: W0124 00:50:36.788958 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.788964 kubelet[2807]: E0124 00:50:36.788967 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.789186 kubelet[2807]: E0124 00:50:36.789167 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.789186 kubelet[2807]: W0124 00:50:36.789181 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.789235 kubelet[2807]: E0124 00:50:36.789189 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.789383 kubelet[2807]: E0124 00:50:36.789363 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.789383 kubelet[2807]: W0124 00:50:36.789376 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.789431 kubelet[2807]: E0124 00:50:36.789385 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.790447 kubelet[2807]: E0124 00:50:36.790419 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.790447 kubelet[2807]: W0124 00:50:36.790435 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.790447 kubelet[2807]: E0124 00:50:36.790445 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:50:36.790639 kubelet[2807]: E0124 00:50:36.790618 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.790639 kubelet[2807]: W0124 00:50:36.790632 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.790639 kubelet[2807]: E0124 00:50:36.790640 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.790851 kubelet[2807]: E0124 00:50:36.790833 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.790851 kubelet[2807]: W0124 00:50:36.790846 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.790908 kubelet[2807]: E0124 00:50:36.790854 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.791049 kubelet[2807]: E0124 00:50:36.791028 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.791049 kubelet[2807]: W0124 00:50:36.791042 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.791049 kubelet[2807]: E0124 00:50:36.791050 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.791674 kubelet[2807]: E0124 00:50:36.791649 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.791674 kubelet[2807]: W0124 00:50:36.791666 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.791674 kubelet[2807]: E0124 00:50:36.791678 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.791912 kubelet[2807]: E0124 00:50:36.791891 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.791912 kubelet[2807]: W0124 00:50:36.791906 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.791912 kubelet[2807]: E0124 00:50:36.791914 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:50:36.793150 kubelet[2807]: E0124 00:50:36.793106 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.793150 kubelet[2807]: W0124 00:50:36.793122 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.793150 kubelet[2807]: E0124 00:50:36.793132 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.793328 kubelet[2807]: E0124 00:50:36.793304 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.793328 kubelet[2807]: W0124 00:50:36.793318 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.793328 kubelet[2807]: E0124 00:50:36.793325 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.809879 kubelet[2807]: E0124 00:50:36.809838 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.809879 kubelet[2807]: W0124 00:50:36.809863 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.809879 kubelet[2807]: E0124 00:50:36.809878 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.810071 kubelet[2807]: I0124 00:50:36.809905 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c-socket-dir\") pod \"csi-node-driver-dbqvx\" (UID: \"f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c\") " pod="calico-system/csi-node-driver-dbqvx" Jan 24 00:50:36.810185 kubelet[2807]: E0124 00:50:36.810162 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.810185 kubelet[2807]: W0124 00:50:36.810178 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.810232 kubelet[2807]: E0124 00:50:36.810188 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:50:36.810232 kubelet[2807]: I0124 00:50:36.810214 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcw9z\" (UniqueName: \"kubernetes.io/projected/f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c-kube-api-access-gcw9z\") pod \"csi-node-driver-dbqvx\" (UID: \"f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c\") " pod="calico-system/csi-node-driver-dbqvx" Jan 24 00:50:36.810683 kubelet[2807]: E0124 00:50:36.810661 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.810683 kubelet[2807]: W0124 00:50:36.810676 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.810745 kubelet[2807]: E0124 00:50:36.810686 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.810745 kubelet[2807]: I0124 00:50:36.810712 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c-kubelet-dir\") pod \"csi-node-driver-dbqvx\" (UID: \"f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c\") " pod="calico-system/csi-node-driver-dbqvx" Jan 24 00:50:36.811017 kubelet[2807]: E0124 00:50:36.810995 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.811017 kubelet[2807]: W0124 00:50:36.811010 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.811017 kubelet[2807]: E0124 00:50:36.811019 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.811102 kubelet[2807]: I0124 00:50:36.811044 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c-registration-dir\") pod \"csi-node-driver-dbqvx\" (UID: \"f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c\") " pod="calico-system/csi-node-driver-dbqvx" Jan 24 00:50:36.811289 kubelet[2807]: E0124 00:50:36.811266 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.811289 kubelet[2807]: W0124 00:50:36.811279 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.811289 kubelet[2807]: E0124 00:50:36.811288 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:50:36.811499 kubelet[2807]: I0124 00:50:36.811480 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c-varrun\") pod \"csi-node-driver-dbqvx\" (UID: \"f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c\") " pod="calico-system/csi-node-driver-dbqvx" Jan 24 00:50:36.812092 kubelet[2807]: E0124 00:50:36.812048 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.812092 kubelet[2807]: W0124 00:50:36.812063 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.812092 kubelet[2807]: E0124 00:50:36.812071 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.812853 kubelet[2807]: E0124 00:50:36.812581 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.812853 kubelet[2807]: W0124 00:50:36.812596 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.812853 kubelet[2807]: E0124 00:50:36.812604 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.812947 kubelet[2807]: E0124 00:50:36.812901 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.812947 kubelet[2807]: W0124 00:50:36.812909 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.812947 kubelet[2807]: E0124 00:50:36.812917 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.813289 kubelet[2807]: E0124 00:50:36.813266 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.813289 kubelet[2807]: W0124 00:50:36.813282 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.813985 kubelet[2807]: E0124 00:50:36.813837 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:50:36.814222 kubelet[2807]: E0124 00:50:36.814198 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.814222 kubelet[2807]: W0124 00:50:36.814214 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.814222 kubelet[2807]: E0124 00:50:36.814224 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.815231 kubelet[2807]: E0124 00:50:36.814841 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.815231 kubelet[2807]: W0124 00:50:36.814853 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.815231 kubelet[2807]: E0124 00:50:36.814862 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.815339 kubelet[2807]: E0124 00:50:36.815297 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.815339 kubelet[2807]: W0124 00:50:36.815306 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.815339 kubelet[2807]: E0124 00:50:36.815315 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.815993 kubelet[2807]: E0124 00:50:36.815972 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.815993 kubelet[2807]: W0124 00:50:36.815989 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.816070 kubelet[2807]: E0124 00:50:36.815998 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.816320 kubelet[2807]: E0124 00:50:36.816297 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.816320 kubelet[2807]: W0124 00:50:36.816316 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.816373 kubelet[2807]: E0124 00:50:36.816325 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:50:36.816615 kubelet[2807]: E0124 00:50:36.816586 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.816615 kubelet[2807]: W0124 00:50:36.816602 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.816615 kubelet[2807]: E0124 00:50:36.816611 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.830000 audit: BPF prog-id=163 op=LOAD Jan 24 00:50:36.831000 audit: BPF prog-id=164 op=LOAD Jan 24 00:50:36.831000 audit[3235]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3223 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:36.831000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564316261343233333734333135326530306639363635306165383064 Jan 24 00:50:36.831000 audit: BPF prog-id=164 op=UNLOAD Jan 24 00:50:36.831000 audit[3235]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3223 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:36.831000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564316261343233333734333135326530306639363635306165383064 Jan 24 00:50:36.831000 audit: BPF prog-id=165 op=LOAD Jan 24 00:50:36.831000 audit[3235]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3223 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:36.831000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564316261343233333734333135326530306639363635306165383064 Jan 24 00:50:36.832000 audit: BPF prog-id=166 op=LOAD Jan 24 00:50:36.832000 audit[3235]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3223 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:36.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564316261343233333734333135326530306639363635306165383064 Jan 24 00:50:36.832000 audit: BPF prog-id=166 op=UNLOAD 
Jan 24 00:50:36.832000 audit[3235]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3223 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:36.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564316261343233333734333135326530306639363635306165383064 Jan 24 00:50:36.832000 audit: BPF prog-id=165 op=UNLOAD Jan 24 00:50:36.832000 audit[3235]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3223 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:36.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564316261343233333734333135326530306639363635306165383064 Jan 24 00:50:36.832000 audit: BPF prog-id=167 op=LOAD Jan 24 00:50:36.832000 audit[3235]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3223 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:36.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564316261343233333734333135326530306639363635306165383064 Jan 24 00:50:36.840378 kubelet[2807]: E0124 00:50:36.840345 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:36.841071 containerd[1593]: time="2026-01-24T00:50:36.841040806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rvvw8,Uid:f1bafa18-b901-44af-bf4b-c41ce0187660,Namespace:calico-system,Attempt:0,}" Jan 24 00:50:36.867496 containerd[1593]: time="2026-01-24T00:50:36.867050633Z" level=info msg="connecting to shim b990dcc1a94ea7d53562a276e3c44c19667bfb2f3972b6ae2afe4bd74da7e0f1" address="unix:///run/containerd/s/acb5c19b390f7efdabd1364724fb0917a20607493b36d988a625a9c615735e15" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:50:36.905073 systemd[1]: Started cri-containerd-b990dcc1a94ea7d53562a276e3c44c19667bfb2f3972b6ae2afe4bd74da7e0f1.scope - libcontainer container b990dcc1a94ea7d53562a276e3c44c19667bfb2f3972b6ae2afe4bd74da7e0f1. 
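
From here the log interleaves three things: audit SYSCALL/PROCTITLE records for runc (arch=c000003e is x86-64, where syscall 321 is bpf(2), hence the paired BPF prog-id LOAD/UNLOAD events as each container is set up), the kubelet warning that the node's resolv.conf lists more nameservers than the limit so only 172.232.0.18, .17 and .16 are applied, and containerd creating the calico-node-rvvw8 pod sandbox, connecting to its shim socket, and handing the container off to a cri-containerd-<id>.scope unit. The audit records carry the runc command line only as a hex-encoded PROCTITLE value with NUL-separated arguments; the helper below is a hypothetical convenience script (decode_proctitle is not something present on this host) that recovers it.

    #!/usr/bin/env python3
    # Assumed helper, not part of the host tooling: audit PROCTITLE records
    # hex-encode the process command line with NUL bytes separating the
    # arguments, so the runc invocations logged above can be recovered.
    def decode_proctitle(hex_value: str) -> list:
        raw = bytes.fromhex(hex_value)
        return [arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg]


    if __name__ == "__main__":
        # Shortened prefix of the PROCTITLE values in the records above.
        sample = ("72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F"
                  "72756E632F6B38732E696F")
        print(decode_proctitle(sample))
        # -> ['runc', '--root', '/run/containerd/runc/k8s.io']

Applied to the full values above, this yields runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/... with the sandbox id cut short because the audit record itself truncates the proctitle.
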
Jan 24 00:50:36.912888 kubelet[2807]: E0124 00:50:36.912867 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.913085 kubelet[2807]: W0124 00:50:36.913018 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.913085 kubelet[2807]: E0124 00:50:36.913040 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.913755 kubelet[2807]: E0124 00:50:36.913743 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.913986 kubelet[2807]: W0124 00:50:36.913972 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.914256 kubelet[2807]: E0124 00:50:36.914156 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.914841 kubelet[2807]: E0124 00:50:36.914625 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.914841 kubelet[2807]: W0124 00:50:36.914637 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.914841 kubelet[2807]: E0124 00:50:36.914646 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.915076 kubelet[2807]: E0124 00:50:36.915063 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.915212 kubelet[2807]: W0124 00:50:36.915114 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.915212 kubelet[2807]: E0124 00:50:36.915126 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.915507 kubelet[2807]: E0124 00:50:36.915487 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.916003 kubelet[2807]: W0124 00:50:36.915587 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.916003 kubelet[2807]: E0124 00:50:36.915602 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:50:36.916856 kubelet[2807]: E0124 00:50:36.916821 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.916856 kubelet[2807]: W0124 00:50:36.916834 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.916856 kubelet[2807]: E0124 00:50:36.916843 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.918046 kubelet[2807]: E0124 00:50:36.917850 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.918046 kubelet[2807]: W0124 00:50:36.917863 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.918046 kubelet[2807]: E0124 00:50:36.917872 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.918365 kubelet[2807]: E0124 00:50:36.918195 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.918365 kubelet[2807]: W0124 00:50:36.918206 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.918365 kubelet[2807]: E0124 00:50:36.918214 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.918612 kubelet[2807]: E0124 00:50:36.918600 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.918682 kubelet[2807]: W0124 00:50:36.918671 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.918849 kubelet[2807]: E0124 00:50:36.918837 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.919275 kubelet[2807]: E0124 00:50:36.919264 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.919449 kubelet[2807]: W0124 00:50:36.919327 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.919449 kubelet[2807]: E0124 00:50:36.919340 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:50:36.919932 kubelet[2807]: E0124 00:50:36.919913 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.920034 kubelet[2807]: W0124 00:50:36.920009 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.920034 kubelet[2807]: E0124 00:50:36.920022 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.920567 kubelet[2807]: E0124 00:50:36.920520 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.920692 kubelet[2807]: W0124 00:50:36.920619 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.920775 kubelet[2807]: E0124 00:50:36.920757 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.922504 kubelet[2807]: E0124 00:50:36.921978 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.922504 kubelet[2807]: W0124 00:50:36.922240 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.922504 kubelet[2807]: E0124 00:50:36.922256 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.923130 kubelet[2807]: E0124 00:50:36.923010 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.923130 kubelet[2807]: W0124 00:50:36.923024 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.923130 kubelet[2807]: E0124 00:50:36.923034 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.923990 kubelet[2807]: E0124 00:50:36.923955 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.923990 kubelet[2807]: W0124 00:50:36.923968 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.923990 kubelet[2807]: E0124 00:50:36.923977 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:50:36.924928 kubelet[2807]: E0124 00:50:36.924890 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.924928 kubelet[2807]: W0124 00:50:36.924902 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.924928 kubelet[2807]: E0124 00:50:36.924914 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.925993 kubelet[2807]: E0124 00:50:36.925948 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.925993 kubelet[2807]: W0124 00:50:36.925960 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.925993 kubelet[2807]: E0124 00:50:36.925969 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.926963 kubelet[2807]: E0124 00:50:36.926932 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.926963 kubelet[2807]: W0124 00:50:36.926943 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.926963 kubelet[2807]: E0124 00:50:36.926951 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.927809 kubelet[2807]: E0124 00:50:36.927671 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.927809 kubelet[2807]: W0124 00:50:36.927682 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.927809 kubelet[2807]: E0124 00:50:36.927691 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.928247 kubelet[2807]: E0124 00:50:36.928173 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.928247 kubelet[2807]: W0124 00:50:36.928184 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.928247 kubelet[2807]: E0124 00:50:36.928193 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:50:36.927000 audit: BPF prog-id=168 op=LOAD Jan 24 00:50:36.929353 kubelet[2807]: E0124 00:50:36.929185 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.929353 kubelet[2807]: W0124 00:50:36.929196 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.929353 kubelet[2807]: E0124 00:50:36.929205 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.930216 kubelet[2807]: E0124 00:50:36.929999 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.930216 kubelet[2807]: W0124 00:50:36.930010 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.930216 kubelet[2807]: E0124 00:50:36.930021 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.930809 kubelet[2807]: E0124 00:50:36.930601 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.930809 kubelet[2807]: W0124 00:50:36.930612 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.930809 kubelet[2807]: E0124 00:50:36.930620 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.930979 kubelet[2807]: E0124 00:50:36.930968 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.931073 kubelet[2807]: W0124 00:50:36.931060 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.931817 kubelet[2807]: E0124 00:50:36.931115 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.932037 kubelet[2807]: E0124 00:50:36.932026 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.932095 kubelet[2807]: W0124 00:50:36.932083 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.932141 kubelet[2807]: E0124 00:50:36.932132 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:50:36.943622 kubelet[2807]: E0124 00:50:36.943606 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:36.943714 kubelet[2807]: W0124 00:50:36.943701 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:36.943764 kubelet[2807]: E0124 00:50:36.943753 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:36.950393 kernel: kauditd_printk_skb: 41 callbacks suppressed Jan 24 00:50:36.950453 kernel: audit: type=1334 audit(1769215836.927:560): prog-id=168 op=LOAD Jan 24 00:50:36.929000 audit: BPF prog-id=169 op=LOAD Jan 24 00:50:36.964839 kernel: audit: type=1334 audit(1769215836.929:561): prog-id=169 op=LOAD Jan 24 00:50:36.964929 kernel: audit: type=1300 audit(1769215836.929:561): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3308 pid=3319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:36.929000 audit[3319]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3308 pid=3319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:36.965041 kubelet[2807]: E0124 00:50:36.959085 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:36.965092 containerd[1593]: time="2026-01-24T00:50:36.957221700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64cb8b8766-jdzjs,Uid:5b71e833-2f0f-4e21-8fb9-d106454e0413,Namespace:calico-system,Attempt:0,} returns sandbox id \"5d1ba4233743152e00f96650ae80dbe024213fe94f0d4a7a7048dc7a4870a499\"" Jan 24 00:50:36.965092 containerd[1593]: time="2026-01-24T00:50:36.962618061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 24 00:50:36.929000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239393064636331613934656137643533353632613237366533633434 Jan 24 00:50:36.972821 kernel: audit: type=1327 audit(1769215836.929:561): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239393064636331613934656137643533353632613237366533633434 Jan 24 00:50:36.929000 audit: BPF prog-id=169 op=UNLOAD Jan 24 00:50:36.982248 kernel: audit: type=1334 audit(1769215836.929:562): prog-id=169 op=UNLOAD Jan 24 00:50:36.982288 kernel: audit: type=1300 audit(1769215836.929:562): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3308 pid=3319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:36.929000 audit[3319]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3308 pid=3319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:36.929000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239393064636331613934656137643533353632613237366533633434 Jan 24 00:50:36.984033 kernel: audit: type=1327 audit(1769215836.929:562): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239393064636331613934656137643533353632613237366533633434 Jan 24 00:50:36.991957 kernel: audit: type=1334 audit(1769215836.930:563): prog-id=170 op=LOAD Jan 24 00:50:36.930000 audit: BPF prog-id=170 op=LOAD Jan 24 00:50:37.000331 kernel: audit: type=1300 audit(1769215836.930:563): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3308 pid=3319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:36.930000 audit[3319]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3308 pid=3319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:37.011117 kernel: audit: type=1327 audit(1769215836.930:563): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239393064636331613934656137643533353632613237366533633434 Jan 24 00:50:36.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239393064636331613934656137643533353632613237366533633434 Jan 24 00:50:37.011263 kubelet[2807]: E0124 00:50:37.002128 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:37.011325 containerd[1593]: time="2026-01-24T00:50:37.001426316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rvvw8,Uid:f1bafa18-b901-44af-bf4b-c41ce0187660,Namespace:calico-system,Attempt:0,} returns sandbox id \"b990dcc1a94ea7d53562a276e3c44c19667bfb2f3972b6ae2afe4bd74da7e0f1\"" Jan 24 00:50:36.930000 audit: BPF prog-id=171 op=LOAD Jan 24 00:50:36.930000 audit[3319]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3308 pid=3319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:36.930000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239393064636331613934656137643533353632613237366533633434 Jan 24 00:50:36.930000 audit: BPF prog-id=171 op=UNLOAD Jan 24 00:50:36.930000 audit[3319]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3308 pid=3319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:36.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239393064636331613934656137643533353632613237366533633434 Jan 24 00:50:36.930000 audit: BPF prog-id=170 op=UNLOAD Jan 24 00:50:36.930000 audit[3319]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3308 pid=3319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:36.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239393064636331613934656137643533353632613237366533633434 Jan 24 00:50:36.930000 audit: BPF prog-id=172 op=LOAD Jan 24 00:50:36.930000 audit[3319]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3308 pid=3319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:36.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239393064636331613934656137643533353632613237366533633434 Jan 24 00:50:37.345000 audit[3379]: NETFILTER_CFG table=filter:113 family=2 entries=22 op=nft_register_rule pid=3379 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:50:37.345000 audit[3379]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff366ca5e0 a2=0 a3=7fff366ca5cc items=0 ppid=2920 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:37.345000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:50:37.347000 audit[3379]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3379 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:50:37.347000 audit[3379]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff366ca5e0 a2=0 a3=0 items=0 ppid=2920 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:37.347000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:50:37.389408 update_engine[1570]: I20260124 00:50:37.389317 1570 update_attempter.cc:509] Updating boot flags... Jan 24 00:50:37.590137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1315892741.mount: Deactivated successfully. Jan 24 00:50:38.341364 containerd[1593]: time="2026-01-24T00:50:38.341322312Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:38.342311 containerd[1593]: time="2026-01-24T00:50:38.342287194Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 24 00:50:38.343821 containerd[1593]: time="2026-01-24T00:50:38.342899235Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:38.344643 containerd[1593]: time="2026-01-24T00:50:38.344598519Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:38.345276 containerd[1593]: time="2026-01-24T00:50:38.345180249Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.382540268s" Jan 24 00:50:38.345276 containerd[1593]: time="2026-01-24T00:50:38.345208310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 24 00:50:38.346315 containerd[1593]: time="2026-01-24T00:50:38.346293212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 24 00:50:38.364582 containerd[1593]: time="2026-01-24T00:50:38.364541648Z" level=info msg="CreateContainer within sandbox \"5d1ba4233743152e00f96650ae80dbe024213fe94f0d4a7a7048dc7a4870a499\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 24 00:50:38.369662 containerd[1593]: time="2026-01-24T00:50:38.369628698Z" level=info msg="Container 23dd41cd1d8087dd28f6278b269491ae966a78031a6b8be484694bfb62709a07: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:50:38.374333 containerd[1593]: time="2026-01-24T00:50:38.374307707Z" level=info msg="CreateContainer within sandbox \"5d1ba4233743152e00f96650ae80dbe024213fe94f0d4a7a7048dc7a4870a499\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"23dd41cd1d8087dd28f6278b269491ae966a78031a6b8be484694bfb62709a07\"" Jan 24 00:50:38.375246 containerd[1593]: time="2026-01-24T00:50:38.375227279Z" level=info msg="StartContainer for \"23dd41cd1d8087dd28f6278b269491ae966a78031a6b8be484694bfb62709a07\"" Jan 24 00:50:38.376739 containerd[1593]: time="2026-01-24T00:50:38.376676102Z" level=info msg="connecting to shim 23dd41cd1d8087dd28f6278b269491ae966a78031a6b8be484694bfb62709a07" address="unix:///run/containerd/s/70c5e92ed900b383e590bf25c3ac95634d557f38793c3db9c06fa542f6950504" protocol=ttrpc version=3 Jan 24 00:50:38.397949 systemd[1]: Started 
cri-containerd-23dd41cd1d8087dd28f6278b269491ae966a78031a6b8be484694bfb62709a07.scope - libcontainer container 23dd41cd1d8087dd28f6278b269491ae966a78031a6b8be484694bfb62709a07. Jan 24 00:50:38.411000 audit: BPF prog-id=173 op=LOAD Jan 24 00:50:38.411000 audit: BPF prog-id=174 op=LOAD Jan 24 00:50:38.411000 audit[3414]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3223 pid=3414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:38.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233646434316364316438303837646432386636323738623236393439 Jan 24 00:50:38.411000 audit: BPF prog-id=174 op=UNLOAD Jan 24 00:50:38.411000 audit[3414]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3223 pid=3414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:38.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233646434316364316438303837646432386636323738623236393439 Jan 24 00:50:38.412000 audit: BPF prog-id=175 op=LOAD Jan 24 00:50:38.412000 audit[3414]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3223 pid=3414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:38.412000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233646434316364316438303837646432386636323738623236393439 Jan 24 00:50:38.412000 audit: BPF prog-id=176 op=LOAD Jan 24 00:50:38.412000 audit[3414]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3223 pid=3414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:38.412000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233646434316364316438303837646432386636323738623236393439 Jan 24 00:50:38.412000 audit: BPF prog-id=176 op=UNLOAD Jan 24 00:50:38.412000 audit[3414]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3223 pid=3414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:38.412000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233646434316364316438303837646432386636323738623236393439 Jan 24 00:50:38.412000 audit: BPF prog-id=175 op=UNLOAD Jan 24 00:50:38.412000 audit[3414]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3223 pid=3414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:38.412000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233646434316364316438303837646432386636323738623236393439 Jan 24 00:50:38.412000 audit: BPF prog-id=177 op=LOAD Jan 24 00:50:38.412000 audit[3414]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3223 pid=3414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:38.412000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233646434316364316438303837646432386636323738623236393439 Jan 24 00:50:38.454360 containerd[1593]: time="2026-01-24T00:50:38.454321106Z" level=info msg="StartContainer for \"23dd41cd1d8087dd28f6278b269491ae966a78031a6b8be484694bfb62709a07\" returns successfully" Jan 24 00:50:38.698737 kubelet[2807]: E0124 00:50:38.697964 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbqvx" podUID="f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c" Jan 24 00:50:38.768636 kubelet[2807]: E0124 00:50:38.768332 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:38.783750 kubelet[2807]: I0124 00:50:38.783697 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-64cb8b8766-jdzjs" podStartSLOduration=1.399409423 podStartE2EDuration="2.783684746s" podCreationTimestamp="2026-01-24 00:50:36 +0000 UTC" firstStartedPulling="2026-01-24 00:50:36.961910879 +0000 UTC m=+18.362622695" lastFinishedPulling="2026-01-24 00:50:38.346186192 +0000 UTC m=+19.746898018" observedRunningTime="2026-01-24 00:50:38.783200605 +0000 UTC m=+20.183912421" watchObservedRunningTime="2026-01-24 00:50:38.783684746 +0000 UTC m=+20.184396562" Jan 24 00:50:38.806770 kubelet[2807]: E0124 00:50:38.806735 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:38.806770 kubelet[2807]: W0124 00:50:38.806756 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:38.806770 
kubelet[2807]: E0124 00:50:38.806773 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:38.807009 kubelet[2807]: E0124 00:50:38.806992 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:38.807009 kubelet[2807]: W0124 00:50:38.807007 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:38.807082 kubelet[2807]: E0124 00:50:38.807015 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:38.807251 kubelet[2807]: E0124 00:50:38.807194 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:38.807251 kubelet[2807]: W0124 00:50:38.807207 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:38.807251 kubelet[2807]: E0124 00:50:38.807215 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:38.807436 kubelet[2807]: E0124 00:50:38.807423 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:38.807436 kubelet[2807]: W0124 00:50:38.807431 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:38.807571 kubelet[2807]: E0124 00:50:38.807439 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:38.807655 kubelet[2807]: E0124 00:50:38.807607 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:38.807655 kubelet[2807]: W0124 00:50:38.807616 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:38.807655 kubelet[2807]: E0124 00:50:38.807624 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:50:38.807842 kubelet[2807]: E0124 00:50:38.807813 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:38.807842 kubelet[2807]: W0124 00:50:38.807821 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:38.807842 kubelet[2807]: E0124 00:50:38.807829 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:38.808122 kubelet[2807]: E0124 00:50:38.808003 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:38.808122 kubelet[2807]: W0124 00:50:38.808010 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:38.808122 kubelet[2807]: E0124 00:50:38.808019 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:38.808423 kubelet[2807]: E0124 00:50:38.808197 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:38.808423 kubelet[2807]: W0124 00:50:38.808204 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:38.808423 kubelet[2807]: E0124 00:50:38.808229 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:38.808734 kubelet[2807]: E0124 00:50:38.808719 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:38.808769 kubelet[2807]: W0124 00:50:38.808757 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:38.808834 kubelet[2807]: E0124 00:50:38.808769 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:38.809042 kubelet[2807]: E0124 00:50:38.809015 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:38.809042 kubelet[2807]: W0124 00:50:38.809026 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:38.809042 kubelet[2807]: E0124 00:50:38.809034 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:50:38.809462 kubelet[2807]: E0124 00:50:38.809403 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:38.809462 kubelet[2807]: W0124 00:50:38.809417 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:38.809462 kubelet[2807]: E0124 00:50:38.809426 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:38.809643 kubelet[2807]: E0124 00:50:38.809588 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:38.809643 kubelet[2807]: W0124 00:50:38.809596 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:38.809643 kubelet[2807]: E0124 00:50:38.809604 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:38.809872 kubelet[2807]: E0124 00:50:38.809762 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:38.809872 kubelet[2807]: W0124 00:50:38.809769 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:38.809872 kubelet[2807]: E0124 00:50:38.809778 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:38.810022 kubelet[2807]: E0124 00:50:38.810003 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:38.810022 kubelet[2807]: W0124 00:50:38.810017 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:38.810022 kubelet[2807]: E0124 00:50:38.810025 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:38.811406 kubelet[2807]: E0124 00:50:38.811385 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:38.811406 kubelet[2807]: W0124 00:50:38.811400 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:38.811471 kubelet[2807]: E0124 00:50:38.811409 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:50:38.835848 kubelet[2807]: E0124 00:50:38.835787 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:38.835848 kubelet[2807]: W0124 00:50:38.835846 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:38.835964 kubelet[2807]: E0124 00:50:38.835863 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:38.836223 kubelet[2807]: E0124 00:50:38.836209 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:38.836223 kubelet[2807]: W0124 00:50:38.836221 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:38.836283 kubelet[2807]: E0124 00:50:38.836230 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:38.836583 kubelet[2807]: E0124 00:50:38.836569 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:38.836612 kubelet[2807]: W0124 00:50:38.836582 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:38.836612 kubelet[2807]: E0124 00:50:38.836605 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:38.837007 kubelet[2807]: E0124 00:50:38.836993 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:38.837007 kubelet[2807]: W0124 00:50:38.837006 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:38.837077 kubelet[2807]: E0124 00:50:38.837015 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:38.837881 kubelet[2807]: E0124 00:50:38.837866 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:38.837881 kubelet[2807]: W0124 00:50:38.837879 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:38.837938 kubelet[2807]: E0124 00:50:38.837888 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:50:38.838134 kubelet[2807]: E0124 00:50:38.838120 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:38.838134 kubelet[2807]: W0124 00:50:38.838132 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:38.838241 kubelet[2807]: E0124 00:50:38.838141 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:38.838391 kubelet[2807]: E0124 00:50:38.838378 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:38.838391 kubelet[2807]: W0124 00:50:38.838390 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:38.838440 kubelet[2807]: E0124 00:50:38.838398 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:38.838634 kubelet[2807]: E0124 00:50:38.838621 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:38.838634 kubelet[2807]: W0124 00:50:38.838633 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:38.838684 kubelet[2807]: E0124 00:50:38.838641 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:38.838919 kubelet[2807]: E0124 00:50:38.838905 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:38.838919 kubelet[2807]: W0124 00:50:38.838917 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:38.838964 kubelet[2807]: E0124 00:50:38.838925 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:38.839419 kubelet[2807]: E0124 00:50:38.839398 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:38.839630 kubelet[2807]: W0124 00:50:38.839602 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:38.839630 kubelet[2807]: E0124 00:50:38.839610 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:50:38.839895 kubelet[2807]: E0124 00:50:38.839875 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:38.839895 kubelet[2807]: W0124 00:50:38.839890 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:38.839949 kubelet[2807]: E0124 00:50:38.839899 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:38.840141 kubelet[2807]: E0124 00:50:38.840118 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:38.840141 kubelet[2807]: W0124 00:50:38.840136 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:38.840194 kubelet[2807]: E0124 00:50:38.840144 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:38.840549 kubelet[2807]: E0124 00:50:38.840536 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:38.840549 kubelet[2807]: W0124 00:50:38.840548 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:38.840593 kubelet[2807]: E0124 00:50:38.840555 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:38.840843 kubelet[2807]: E0124 00:50:38.840828 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:38.840882 kubelet[2807]: W0124 00:50:38.840842 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:38.840882 kubelet[2807]: E0124 00:50:38.840856 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:38.841118 kubelet[2807]: E0124 00:50:38.841104 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:38.841118 kubelet[2807]: W0124 00:50:38.841117 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:38.841168 kubelet[2807]: E0124 00:50:38.841126 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:50:38.841318 kubelet[2807]: E0124 00:50:38.841306 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:38.841318 kubelet[2807]: W0124 00:50:38.841316 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:38.841371 kubelet[2807]: E0124 00:50:38.841325 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:38.841572 kubelet[2807]: E0124 00:50:38.841559 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:38.841572 kubelet[2807]: W0124 00:50:38.841570 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:38.841620 kubelet[2807]: E0124 00:50:38.841579 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:50:38.841908 kubelet[2807]: E0124 00:50:38.841894 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:50:38.841908 kubelet[2807]: W0124 00:50:38.841906 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:50:38.841968 kubelet[2807]: E0124 00:50:38.841914 2807 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:50:39.014299 containerd[1593]: time="2026-01-24T00:50:39.014141280Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:39.015647 containerd[1593]: time="2026-01-24T00:50:39.015609243Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 24 00:50:39.017705 containerd[1593]: time="2026-01-24T00:50:39.017659597Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:39.018767 containerd[1593]: time="2026-01-24T00:50:39.018712819Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:39.019636 containerd[1593]: time="2026-01-24T00:50:39.019157120Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 672.836368ms" Jan 24 00:50:39.019636 containerd[1593]: time="2026-01-24T00:50:39.019191260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 24 00:50:39.024642 containerd[1593]: time="2026-01-24T00:50:39.024587510Z" level=info msg="CreateContainer within sandbox \"b990dcc1a94ea7d53562a276e3c44c19667bfb2f3972b6ae2afe4bd74da7e0f1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 24 00:50:39.035940 containerd[1593]: time="2026-01-24T00:50:39.035889681Z" level=info msg="Container 42ef3fb98d55feff560b24f68f33f042b1e2680456becbed3d11ef09bdb88679: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:50:39.043079 containerd[1593]: time="2026-01-24T00:50:39.043034285Z" level=info msg="CreateContainer within sandbox \"b990dcc1a94ea7d53562a276e3c44c19667bfb2f3972b6ae2afe4bd74da7e0f1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"42ef3fb98d55feff560b24f68f33f042b1e2680456becbed3d11ef09bdb88679\"" Jan 24 00:50:39.044113 containerd[1593]: time="2026-01-24T00:50:39.044078306Z" level=info msg="StartContainer for \"42ef3fb98d55feff560b24f68f33f042b1e2680456becbed3d11ef09bdb88679\"" Jan 24 00:50:39.046314 containerd[1593]: time="2026-01-24T00:50:39.046263580Z" level=info msg="connecting to shim 42ef3fb98d55feff560b24f68f33f042b1e2680456becbed3d11ef09bdb88679" address="unix:///run/containerd/s/acb5c19b390f7efdabd1364724fb0917a20607493b36d988a625a9c615735e15" protocol=ttrpc version=3 Jan 24 00:50:39.072157 systemd[1]: Started cri-containerd-42ef3fb98d55feff560b24f68f33f042b1e2680456becbed3d11ef09bdb88679.scope - libcontainer container 42ef3fb98d55feff560b24f68f33f042b1e2680456becbed3d11ef09bdb88679. 
Jan 24 00:50:39.128000 audit: BPF prog-id=178 op=LOAD Jan 24 00:50:39.128000 audit[3490]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3308 pid=3490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:39.128000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432656633666239386435356665666635363062323466363866333366 Jan 24 00:50:39.129000 audit: BPF prog-id=179 op=LOAD Jan 24 00:50:39.129000 audit[3490]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3308 pid=3490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:39.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432656633666239386435356665666635363062323466363866333366 Jan 24 00:50:39.129000 audit: BPF prog-id=179 op=UNLOAD Jan 24 00:50:39.129000 audit[3490]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3308 pid=3490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:39.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432656633666239386435356665666635363062323466363866333366 Jan 24 00:50:39.129000 audit: BPF prog-id=178 op=UNLOAD Jan 24 00:50:39.129000 audit[3490]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3308 pid=3490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:39.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432656633666239386435356665666635363062323466363866333366 Jan 24 00:50:39.129000 audit: BPF prog-id=180 op=LOAD Jan 24 00:50:39.129000 audit[3490]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3308 pid=3490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:39.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432656633666239386435356665666635363062323466363866333366 Jan 24 00:50:39.154164 containerd[1593]: time="2026-01-24T00:50:39.154116104Z" level=info msg="StartContainer for 
\"42ef3fb98d55feff560b24f68f33f042b1e2680456becbed3d11ef09bdb88679\" returns successfully" Jan 24 00:50:39.173563 systemd[1]: cri-containerd-42ef3fb98d55feff560b24f68f33f042b1e2680456becbed3d11ef09bdb88679.scope: Deactivated successfully. Jan 24 00:50:39.179000 audit: BPF prog-id=180 op=UNLOAD Jan 24 00:50:39.181927 containerd[1593]: time="2026-01-24T00:50:39.181898236Z" level=info msg="received container exit event container_id:\"42ef3fb98d55feff560b24f68f33f042b1e2680456becbed3d11ef09bdb88679\" id:\"42ef3fb98d55feff560b24f68f33f042b1e2680456becbed3d11ef09bdb88679\" pid:3504 exited_at:{seconds:1769215839 nanos:180976975}" Jan 24 00:50:39.209564 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42ef3fb98d55feff560b24f68f33f042b1e2680456becbed3d11ef09bdb88679-rootfs.mount: Deactivated successfully. Jan 24 00:50:39.775608 kubelet[2807]: I0124 00:50:39.775557 2807 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:50:39.778102 kubelet[2807]: E0124 00:50:39.776485 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:39.778102 kubelet[2807]: E0124 00:50:39.777884 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:39.779400 containerd[1593]: time="2026-01-24T00:50:39.779100332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 24 00:50:40.697725 kubelet[2807]: E0124 00:50:40.697669 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbqvx" podUID="f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c" Jan 24 00:50:41.519108 containerd[1593]: time="2026-01-24T00:50:41.519066627Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:41.520022 containerd[1593]: time="2026-01-24T00:50:41.519836987Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70445002" Jan 24 00:50:41.520750 containerd[1593]: time="2026-01-24T00:50:41.520720610Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:41.523784 containerd[1593]: time="2026-01-24T00:50:41.523750574Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:41.525225 containerd[1593]: time="2026-01-24T00:50:41.525193757Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 1.746056135s" Jan 24 00:50:41.525225 containerd[1593]: time="2026-01-24T00:50:41.525224587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference 
\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 24 00:50:41.531734 containerd[1593]: time="2026-01-24T00:50:41.531682208Z" level=info msg="CreateContainer within sandbox \"b990dcc1a94ea7d53562a276e3c44c19667bfb2f3972b6ae2afe4bd74da7e0f1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 24 00:50:41.549205 containerd[1593]: time="2026-01-24T00:50:41.548009536Z" level=info msg="Container 47b20e7c10a6456a2c36c7c300658a857bf2b2319c503e199c152214cafaa365: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:50:41.557700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2485833419.mount: Deactivated successfully. Jan 24 00:50:41.559187 containerd[1593]: time="2026-01-24T00:50:41.559130335Z" level=info msg="CreateContainer within sandbox \"b990dcc1a94ea7d53562a276e3c44c19667bfb2f3972b6ae2afe4bd74da7e0f1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"47b20e7c10a6456a2c36c7c300658a857bf2b2319c503e199c152214cafaa365\"" Jan 24 00:50:41.561057 containerd[1593]: time="2026-01-24T00:50:41.560941349Z" level=info msg="StartContainer for \"47b20e7c10a6456a2c36c7c300658a857bf2b2319c503e199c152214cafaa365\"" Jan 24 00:50:41.564328 containerd[1593]: time="2026-01-24T00:50:41.563700963Z" level=info msg="connecting to shim 47b20e7c10a6456a2c36c7c300658a857bf2b2319c503e199c152214cafaa365" address="unix:///run/containerd/s/acb5c19b390f7efdabd1364724fb0917a20607493b36d988a625a9c615735e15" protocol=ttrpc version=3 Jan 24 00:50:41.597013 systemd[1]: Started cri-containerd-47b20e7c10a6456a2c36c7c300658a857bf2b2319c503e199c152214cafaa365.scope - libcontainer container 47b20e7c10a6456a2c36c7c300658a857bf2b2319c503e199c152214cafaa365. Jan 24 00:50:41.650000 audit: BPF prog-id=181 op=LOAD Jan 24 00:50:41.650000 audit[3547]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3308 pid=3547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:41.650000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437623230653763313061363435366132633336633763333030363538 Jan 24 00:50:41.650000 audit: BPF prog-id=182 op=LOAD Jan 24 00:50:41.650000 audit[3547]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3308 pid=3547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:41.650000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437623230653763313061363435366132633336633763333030363538 Jan 24 00:50:41.650000 audit: BPF prog-id=182 op=UNLOAD Jan 24 00:50:41.650000 audit[3547]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3308 pid=3547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:41.650000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437623230653763313061363435366132633336633763333030363538 Jan 24 00:50:41.651000 audit: BPF prog-id=181 op=UNLOAD Jan 24 00:50:41.651000 audit[3547]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3308 pid=3547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:41.651000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437623230653763313061363435366132633336633763333030363538 Jan 24 00:50:41.651000 audit: BPF prog-id=183 op=LOAD Jan 24 00:50:41.651000 audit[3547]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3308 pid=3547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:41.651000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437623230653763313061363435366132633336633763333030363538 Jan 24 00:50:41.677173 containerd[1593]: time="2026-01-24T00:50:41.677142048Z" level=info msg="StartContainer for \"47b20e7c10a6456a2c36c7c300658a857bf2b2319c503e199c152214cafaa365\" returns successfully" Jan 24 00:50:41.792755 kubelet[2807]: E0124 00:50:41.792633 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:42.211250 containerd[1593]: time="2026-01-24T00:50:42.211122209Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:50:42.214985 systemd[1]: cri-containerd-47b20e7c10a6456a2c36c7c300658a857bf2b2319c503e199c152214cafaa365.scope: Deactivated successfully. Jan 24 00:50:42.215341 systemd[1]: cri-containerd-47b20e7c10a6456a2c36c7c300658a857bf2b2319c503e199c152214cafaa365.scope: Consumed 565ms CPU time, 191.5M memory peak, 171.3M written to disk. 
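The audit PROCTITLE records above carry the invoked command line hex-encoded, with NUL bytes separating the arguments; decoded, they show runc being called with --root /run/containerd/runc/k8s.io and --log pointing into the containerd v2 task directory for the container in question. A small illustrative decoder:

    #!/usr/bin/env python3
    # Decode an audit PROCTITLE value: the field is the process argv encoded as
    # hex, with NUL bytes separating the individual arguments.
    def decode_proctitle(hex_value: str) -> list[str]:
        raw = bytes.fromhex(hex_value)
        return [arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00") if arg]

    # Prefix of one of the records above; the full value decodes to
    # "runc --root /run/containerd/runc/k8s.io --log /run/containerd/..."
    print(decode_proctitle("72756E63002D2D726F6F74"))  # ['runc', '--root']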
Jan 24 00:50:42.216689 containerd[1593]: time="2026-01-24T00:50:42.216669428Z" level=info msg="received container exit event container_id:\"47b20e7c10a6456a2c36c7c300658a857bf2b2319c503e199c152214cafaa365\" id:\"47b20e7c10a6456a2c36c7c300658a857bf2b2319c503e199c152214cafaa365\" pid:3559 exited_at:{seconds:1769215842 nanos:216221827}" Jan 24 00:50:42.217000 audit: BPF prog-id=183 op=UNLOAD Jan 24 00:50:42.219271 kernel: kauditd_printk_skb: 71 callbacks suppressed Jan 24 00:50:42.219330 kernel: audit: type=1334 audit(1769215842.217:589): prog-id=183 op=UNLOAD Jan 24 00:50:42.222598 kubelet[2807]: I0124 00:50:42.221051 2807 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 24 00:50:42.259520 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47b20e7c10a6456a2c36c7c300658a857bf2b2319c503e199c152214cafaa365-rootfs.mount: Deactivated successfully. Jan 24 00:50:42.289673 systemd[1]: Created slice kubepods-besteffort-pod8e3373a2_955e_4988_8f5b_bfdd560d0dba.slice - libcontainer container kubepods-besteffort-pod8e3373a2_955e_4988_8f5b_bfdd560d0dba.slice. Jan 24 00:50:42.313749 systemd[1]: Created slice kubepods-burstable-podb78a6970_784f_4d8d_b2e0_d7d30137d78a.slice - libcontainer container kubepods-burstable-podb78a6970_784f_4d8d_b2e0_d7d30137d78a.slice. Jan 24 00:50:42.330788 systemd[1]: Created slice kubepods-besteffort-pod7daaf6bf_447d_4008_8095_834f38ae42be.slice - libcontainer container kubepods-besteffort-pod7daaf6bf_447d_4008_8095_834f38ae42be.slice. Jan 24 00:50:42.344552 systemd[1]: Created slice kubepods-burstable-podb9b2bf92_1ff5_4c12_9c42_3933e6be5da9.slice - libcontainer container kubepods-burstable-podb9b2bf92_1ff5_4c12_9c42_3933e6be5da9.slice. Jan 24 00:50:42.354121 systemd[1]: Created slice kubepods-besteffort-pod7db270eb_ad67_4a41_9877_6d27ca50c210.slice - libcontainer container kubepods-besteffort-pod7db270eb_ad67_4a41_9877_6d27ca50c210.slice. 
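The "Created slice" entries that follow reflect the kubelet's systemd cgroup naming: the pod's QoS class and UID are joined under kubepods, with the dashes in the UID rewritten as underscores (pod 8e3373a2-955e-4988-8f5b-bfdd560d0dba in the besteffort class becomes kubepods-besteffort-pod8e3373a2_955e_4988_8f5b_bfdd560d0dba.slice). A sketch of that mapping, assuming the systemd cgroup driver:

    #!/usr/bin/env python3
    # Reconstruct the systemd slice name used for a pod cgroup, matching the
    # "Created slice kubepods-..." entries in this log.
    def pod_slice_name(qos_class: str, pod_uid: str) -> str:
        # Burstable/besteffort pods get a QoS segment; guaranteed pods typically
        # sit directly under kubepods (an assumption here, not shown in this log).
        prefix = "kubepods" if qos_class == "guaranteed" else "kubepods-" + qos_class
        return prefix + "-pod" + pod_uid.replace("-", "_") + ".slice"

    print(pod_slice_name("besteffort", "8e3373a2-955e-4988-8f5b-bfdd560d0dba"))
    # -> kubepods-besteffort-pod8e3373a2_955e_4988_8f5b_bfdd560d0dba.slice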
Jan 24 00:50:42.364679 kubelet[2807]: I0124 00:50:42.364619 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7db270eb-ad67-4a41-9877-6d27ca50c210-calico-apiserver-certs\") pod \"calico-apiserver-588dc98d7b-pw4d6\" (UID: \"7db270eb-ad67-4a41-9877-6d27ca50c210\") " pod="calico-apiserver/calico-apiserver-588dc98d7b-pw4d6" Jan 24 00:50:42.364679 kubelet[2807]: I0124 00:50:42.364653 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed711851-0b36-4791-909e-763bb4210e62-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-rczw4\" (UID: \"ed711851-0b36-4791-909e-763bb4210e62\") " pod="calico-system/goldmane-7c778bb748-rczw4" Jan 24 00:50:42.364679 kubelet[2807]: I0124 00:50:42.364674 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e3373a2-955e-4988-8f5b-bfdd560d0dba-tigera-ca-bundle\") pod \"calico-kube-controllers-7b94bdbd9d-dvkkh\" (UID: \"8e3373a2-955e-4988-8f5b-bfdd560d0dba\") " pod="calico-system/calico-kube-controllers-7b94bdbd9d-dvkkh" Jan 24 00:50:42.364945 kubelet[2807]: I0124 00:50:42.364688 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn6gn\" (UniqueName: \"kubernetes.io/projected/8e3373a2-955e-4988-8f5b-bfdd560d0dba-kube-api-access-tn6gn\") pod \"calico-kube-controllers-7b94bdbd9d-dvkkh\" (UID: \"8e3373a2-955e-4988-8f5b-bfdd560d0dba\") " pod="calico-system/calico-kube-controllers-7b94bdbd9d-dvkkh" Jan 24 00:50:42.364945 kubelet[2807]: I0124 00:50:42.364704 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b78a6970-784f-4d8d-b2e0-d7d30137d78a-config-volume\") pod \"coredns-66bc5c9577-qc8gl\" (UID: \"b78a6970-784f-4d8d-b2e0-d7d30137d78a\") " pod="kube-system/coredns-66bc5c9577-qc8gl" Jan 24 00:50:42.364945 kubelet[2807]: I0124 00:50:42.364717 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ed711851-0b36-4791-909e-763bb4210e62-goldmane-key-pair\") pod \"goldmane-7c778bb748-rczw4\" (UID: \"ed711851-0b36-4791-909e-763bb4210e62\") " pod="calico-system/goldmane-7c778bb748-rczw4" Jan 24 00:50:42.364945 kubelet[2807]: I0124 00:50:42.364732 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dfe7191c-2a43-4c85-8819-687d02e5b78b-whisker-backend-key-pair\") pod \"whisker-57f46f995f-xhj6p\" (UID: \"dfe7191c-2a43-4c85-8819-687d02e5b78b\") " pod="calico-system/whisker-57f46f995f-xhj6p" Jan 24 00:50:42.364945 kubelet[2807]: I0124 00:50:42.364745 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfe7191c-2a43-4c85-8819-687d02e5b78b-whisker-ca-bundle\") pod \"whisker-57f46f995f-xhj6p\" (UID: \"dfe7191c-2a43-4c85-8819-687d02e5b78b\") " pod="calico-system/whisker-57f46f995f-xhj6p" Jan 24 00:50:42.365062 kubelet[2807]: I0124 00:50:42.364758 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" 
(UniqueName: \"kubernetes.io/secret/7daaf6bf-447d-4008-8095-834f38ae42be-calico-apiserver-certs\") pod \"calico-apiserver-588dc98d7b-7slqw\" (UID: \"7daaf6bf-447d-4008-8095-834f38ae42be\") " pod="calico-apiserver/calico-apiserver-588dc98d7b-7slqw" Jan 24 00:50:42.365062 kubelet[2807]: I0124 00:50:42.364771 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed711851-0b36-4791-909e-763bb4210e62-config\") pod \"goldmane-7c778bb748-rczw4\" (UID: \"ed711851-0b36-4791-909e-763bb4210e62\") " pod="calico-system/goldmane-7c778bb748-rczw4" Jan 24 00:50:42.365062 kubelet[2807]: I0124 00:50:42.364788 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cn54\" (UniqueName: \"kubernetes.io/projected/b78a6970-784f-4d8d-b2e0-d7d30137d78a-kube-api-access-6cn54\") pod \"coredns-66bc5c9577-qc8gl\" (UID: \"b78a6970-784f-4d8d-b2e0-d7d30137d78a\") " pod="kube-system/coredns-66bc5c9577-qc8gl" Jan 24 00:50:42.365062 kubelet[2807]: I0124 00:50:42.364836 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7867w\" (UniqueName: \"kubernetes.io/projected/b9b2bf92-1ff5-4c12-9c42-3933e6be5da9-kube-api-access-7867w\") pod \"coredns-66bc5c9577-khdzd\" (UID: \"b9b2bf92-1ff5-4c12-9c42-3933e6be5da9\") " pod="kube-system/coredns-66bc5c9577-khdzd" Jan 24 00:50:42.365062 kubelet[2807]: I0124 00:50:42.364849 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wq2q\" (UniqueName: \"kubernetes.io/projected/dfe7191c-2a43-4c85-8819-687d02e5b78b-kube-api-access-5wq2q\") pod \"whisker-57f46f995f-xhj6p\" (UID: \"dfe7191c-2a43-4c85-8819-687d02e5b78b\") " pod="calico-system/whisker-57f46f995f-xhj6p" Jan 24 00:50:42.365175 kubelet[2807]: I0124 00:50:42.364864 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh8nb\" (UniqueName: \"kubernetes.io/projected/7daaf6bf-447d-4008-8095-834f38ae42be-kube-api-access-zh8nb\") pod \"calico-apiserver-588dc98d7b-7slqw\" (UID: \"7daaf6bf-447d-4008-8095-834f38ae42be\") " pod="calico-apiserver/calico-apiserver-588dc98d7b-7slqw" Jan 24 00:50:42.365175 kubelet[2807]: I0124 00:50:42.364879 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9b2bf92-1ff5-4c12-9c42-3933e6be5da9-config-volume\") pod \"coredns-66bc5c9577-khdzd\" (UID: \"b9b2bf92-1ff5-4c12-9c42-3933e6be5da9\") " pod="kube-system/coredns-66bc5c9577-khdzd" Jan 24 00:50:42.365175 kubelet[2807]: I0124 00:50:42.364892 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpgrs\" (UniqueName: \"kubernetes.io/projected/7db270eb-ad67-4a41-9877-6d27ca50c210-kube-api-access-rpgrs\") pod \"calico-apiserver-588dc98d7b-pw4d6\" (UID: \"7db270eb-ad67-4a41-9877-6d27ca50c210\") " pod="calico-apiserver/calico-apiserver-588dc98d7b-pw4d6" Jan 24 00:50:42.365175 kubelet[2807]: I0124 00:50:42.364906 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xd76\" (UniqueName: \"kubernetes.io/projected/ed711851-0b36-4791-909e-763bb4210e62-kube-api-access-4xd76\") pod \"goldmane-7c778bb748-rczw4\" (UID: \"ed711851-0b36-4791-909e-763bb4210e62\") " 
pod="calico-system/goldmane-7c778bb748-rczw4" Jan 24 00:50:42.368190 systemd[1]: Created slice kubepods-besteffort-poded711851_0b36_4791_909e_763bb4210e62.slice - libcontainer container kubepods-besteffort-poded711851_0b36_4791_909e_763bb4210e62.slice. Jan 24 00:50:42.371296 systemd[1]: Created slice kubepods-besteffort-poddfe7191c_2a43_4c85_8819_687d02e5b78b.slice - libcontainer container kubepods-besteffort-poddfe7191c_2a43_4c85_8819_687d02e5b78b.slice. Jan 24 00:50:42.604890 containerd[1593]: time="2026-01-24T00:50:42.604853144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b94bdbd9d-dvkkh,Uid:8e3373a2-955e-4988-8f5b-bfdd560d0dba,Namespace:calico-system,Attempt:0,}" Jan 24 00:50:42.632276 kubelet[2807]: E0124 00:50:42.632213 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:42.634169 containerd[1593]: time="2026-01-24T00:50:42.633497282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qc8gl,Uid:b78a6970-784f-4d8d-b2e0-d7d30137d78a,Namespace:kube-system,Attempt:0,}" Jan 24 00:50:42.645067 containerd[1593]: time="2026-01-24T00:50:42.644922351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-588dc98d7b-7slqw,Uid:7daaf6bf-447d-4008-8095-834f38ae42be,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:50:42.654045 kubelet[2807]: E0124 00:50:42.654010 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:42.656022 containerd[1593]: time="2026-01-24T00:50:42.655431238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-khdzd,Uid:b9b2bf92-1ff5-4c12-9c42-3933e6be5da9,Namespace:kube-system,Attempt:0,}" Jan 24 00:50:42.677474 containerd[1593]: time="2026-01-24T00:50:42.677432174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-rczw4,Uid:ed711851-0b36-4791-909e-763bb4210e62,Namespace:calico-system,Attempt:0,}" Jan 24 00:50:42.678075 containerd[1593]: time="2026-01-24T00:50:42.678041444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-588dc98d7b-pw4d6,Uid:7db270eb-ad67-4a41-9877-6d27ca50c210,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:50:42.679198 containerd[1593]: time="2026-01-24T00:50:42.679062557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57f46f995f-xhj6p,Uid:dfe7191c-2a43-4c85-8819-687d02e5b78b,Namespace:calico-system,Attempt:0,}" Jan 24 00:50:42.705977 systemd[1]: Created slice kubepods-besteffort-podf3c105a9_c51e_4146_8f7b_cd2b2b3fff5c.slice - libcontainer container kubepods-besteffort-podf3c105a9_c51e_4146_8f7b_cd2b2b3fff5c.slice. 
Jan 24 00:50:42.715480 containerd[1593]: time="2026-01-24T00:50:42.715442676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbqvx,Uid:f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c,Namespace:calico-system,Attempt:0,}" Jan 24 00:50:42.752057 containerd[1593]: time="2026-01-24T00:50:42.751900296Z" level=error msg="Failed to destroy network for sandbox \"b22eaf8a6b16b2a2fa945b4a45fe7c8075fc821812735a43d713e483ec3a3651\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:50:42.755309 containerd[1593]: time="2026-01-24T00:50:42.755195871Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b94bdbd9d-dvkkh,Uid:8e3373a2-955e-4988-8f5b-bfdd560d0dba,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b22eaf8a6b16b2a2fa945b4a45fe7c8075fc821812735a43d713e483ec3a3651\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:50:42.755539 kubelet[2807]: E0124 00:50:42.755510 2807 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b22eaf8a6b16b2a2fa945b4a45fe7c8075fc821812735a43d713e483ec3a3651\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:50:42.755676 kubelet[2807]: E0124 00:50:42.755643 2807 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b22eaf8a6b16b2a2fa945b4a45fe7c8075fc821812735a43d713e483ec3a3651\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b94bdbd9d-dvkkh" Jan 24 00:50:42.756413 kubelet[2807]: E0124 00:50:42.756112 2807 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b22eaf8a6b16b2a2fa945b4a45fe7c8075fc821812735a43d713e483ec3a3651\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b94bdbd9d-dvkkh" Jan 24 00:50:42.756413 kubelet[2807]: E0124 00:50:42.756197 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b94bdbd9d-dvkkh_calico-system(8e3373a2-955e-4988-8f5b-bfdd560d0dba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b94bdbd9d-dvkkh_calico-system(8e3373a2-955e-4988-8f5b-bfdd560d0dba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b22eaf8a6b16b2a2fa945b4a45fe7c8075fc821812735a43d713e483ec3a3651\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b94bdbd9d-dvkkh" podUID="8e3373a2-955e-4988-8f5b-bfdd560d0dba" Jan 24 
00:50:42.810214 kubelet[2807]: E0124 00:50:42.809667 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:42.815345 containerd[1593]: time="2026-01-24T00:50:42.815303560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 24 00:50:42.864214 containerd[1593]: time="2026-01-24T00:50:42.863999200Z" level=error msg="Failed to destroy network for sandbox \"94fb8cae8248f827a963b4ba948724c3097bdf9185dc87b6693350665cbe337b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:50:42.871101 containerd[1593]: time="2026-01-24T00:50:42.870844290Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-588dc98d7b-pw4d6,Uid:7db270eb-ad67-4a41-9877-6d27ca50c210,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"94fb8cae8248f827a963b4ba948724c3097bdf9185dc87b6693350665cbe337b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:50:42.871846 kubelet[2807]: E0124 00:50:42.871062 2807 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94fb8cae8248f827a963b4ba948724c3097bdf9185dc87b6693350665cbe337b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:50:42.871846 kubelet[2807]: E0124 00:50:42.871112 2807 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94fb8cae8248f827a963b4ba948724c3097bdf9185dc87b6693350665cbe337b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-588dc98d7b-pw4d6" Jan 24 00:50:42.871846 kubelet[2807]: E0124 00:50:42.871132 2807 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94fb8cae8248f827a963b4ba948724c3097bdf9185dc87b6693350665cbe337b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-588dc98d7b-pw4d6" Jan 24 00:50:42.871950 kubelet[2807]: E0124 00:50:42.871188 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-588dc98d7b-pw4d6_calico-apiserver(7db270eb-ad67-4a41-9877-6d27ca50c210)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-588dc98d7b-pw4d6_calico-apiserver(7db270eb-ad67-4a41-9877-6d27ca50c210)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94fb8cae8248f827a963b4ba948724c3097bdf9185dc87b6693350665cbe337b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-588dc98d7b-pw4d6" podUID="7db270eb-ad67-4a41-9877-6d27ca50c210" Jan 24 00:50:42.883299 containerd[1593]: time="2026-01-24T00:50:42.883244911Z" level=error msg="Failed to destroy network for sandbox \"98deb72016ba0213256668d2a031188244bcab45616f4d6c222f06cfb9750129\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:50:42.887391 containerd[1593]: time="2026-01-24T00:50:42.887354828Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-khdzd,Uid:b9b2bf92-1ff5-4c12-9c42-3933e6be5da9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"98deb72016ba0213256668d2a031188244bcab45616f4d6c222f06cfb9750129\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:50:42.888267 kubelet[2807]: E0124 00:50:42.887590 2807 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98deb72016ba0213256668d2a031188244bcab45616f4d6c222f06cfb9750129\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:50:42.888267 kubelet[2807]: E0124 00:50:42.887635 2807 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98deb72016ba0213256668d2a031188244bcab45616f4d6c222f06cfb9750129\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-khdzd" Jan 24 00:50:42.888267 kubelet[2807]: E0124 00:50:42.887654 2807 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98deb72016ba0213256668d2a031188244bcab45616f4d6c222f06cfb9750129\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-khdzd" Jan 24 00:50:42.888365 kubelet[2807]: E0124 00:50:42.887699 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-khdzd_kube-system(b9b2bf92-1ff5-4c12-9c42-3933e6be5da9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-khdzd_kube-system(b9b2bf92-1ff5-4c12-9c42-3933e6be5da9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"98deb72016ba0213256668d2a031188244bcab45616f4d6c222f06cfb9750129\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-khdzd" podUID="b9b2bf92-1ff5-4c12-9c42-3933e6be5da9" Jan 24 00:50:42.890832 containerd[1593]: time="2026-01-24T00:50:42.890788543Z" level=error msg="Failed to destroy network for sandbox \"f780e0a8b09989f7d4d1f9d88bf2714f47ae6150478b212aad0248ca14fc2208\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:50:42.893665 containerd[1593]: time="2026-01-24T00:50:42.893625008Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-588dc98d7b-7slqw,Uid:7daaf6bf-447d-4008-8095-834f38ae42be,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f780e0a8b09989f7d4d1f9d88bf2714f47ae6150478b212aad0248ca14fc2208\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:50:42.894067 kubelet[2807]: E0124 00:50:42.894045 2807 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f780e0a8b09989f7d4d1f9d88bf2714f47ae6150478b212aad0248ca14fc2208\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:50:42.894231 kubelet[2807]: E0124 00:50:42.894126 2807 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f780e0a8b09989f7d4d1f9d88bf2714f47ae6150478b212aad0248ca14fc2208\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-588dc98d7b-7slqw" Jan 24 00:50:42.894231 kubelet[2807]: E0124 00:50:42.894142 2807 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f780e0a8b09989f7d4d1f9d88bf2714f47ae6150478b212aad0248ca14fc2208\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-588dc98d7b-7slqw" Jan 24 00:50:42.894619 kubelet[2807]: E0124 00:50:42.894366 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-588dc98d7b-7slqw_calico-apiserver(7daaf6bf-447d-4008-8095-834f38ae42be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-588dc98d7b-7slqw_calico-apiserver(7daaf6bf-447d-4008-8095-834f38ae42be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f780e0a8b09989f7d4d1f9d88bf2714f47ae6150478b212aad0248ca14fc2208\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-588dc98d7b-7slqw" podUID="7daaf6bf-447d-4008-8095-834f38ae42be" Jan 24 00:50:42.895660 containerd[1593]: time="2026-01-24T00:50:42.895618332Z" level=error msg="Failed to destroy network for sandbox \"e560f7e940b021a41339f5c1185cc1af16ad8ddfa9ae5a2445ccab8896feeb75\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:50:42.897299 containerd[1593]: time="2026-01-24T00:50:42.897247114Z" level=error msg="Failed to destroy network for 
sandbox \"b5237ea9ea01123449ad07f1fd7ddbbf350e62e405a013926d99e8e7d751a75c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:50:42.897580 containerd[1593]: time="2026-01-24T00:50:42.897538565Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbqvx,Uid:f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e560f7e940b021a41339f5c1185cc1af16ad8ddfa9ae5a2445ccab8896feeb75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:50:42.898142 kubelet[2807]: E0124 00:50:42.898074 2807 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e560f7e940b021a41339f5c1185cc1af16ad8ddfa9ae5a2445ccab8896feeb75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:50:42.898142 kubelet[2807]: E0124 00:50:42.898103 2807 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e560f7e940b021a41339f5c1185cc1af16ad8ddfa9ae5a2445ccab8896feeb75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dbqvx" Jan 24 00:50:42.898317 kubelet[2807]: E0124 00:50:42.898116 2807 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e560f7e940b021a41339f5c1185cc1af16ad8ddfa9ae5a2445ccab8896feeb75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dbqvx" Jan 24 00:50:42.898631 kubelet[2807]: E0124 00:50:42.898604 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dbqvx_calico-system(f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dbqvx_calico-system(f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e560f7e940b021a41339f5c1185cc1af16ad8ddfa9ae5a2445ccab8896feeb75\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dbqvx" podUID="f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c" Jan 24 00:50:42.899926 containerd[1593]: time="2026-01-24T00:50:42.899891688Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qc8gl,Uid:b78a6970-784f-4d8d-b2e0-d7d30137d78a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5237ea9ea01123449ad07f1fd7ddbbf350e62e405a013926d99e8e7d751a75c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:50:42.900199 kubelet[2807]: E0124 00:50:42.900180 2807 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5237ea9ea01123449ad07f1fd7ddbbf350e62e405a013926d99e8e7d751a75c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:50:42.900718 kubelet[2807]: E0124 00:50:42.900246 2807 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5237ea9ea01123449ad07f1fd7ddbbf350e62e405a013926d99e8e7d751a75c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-qc8gl" Jan 24 00:50:42.900718 kubelet[2807]: E0124 00:50:42.900260 2807 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5237ea9ea01123449ad07f1fd7ddbbf350e62e405a013926d99e8e7d751a75c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-qc8gl" Jan 24 00:50:42.900875 kubelet[2807]: E0124 00:50:42.900291 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-qc8gl_kube-system(b78a6970-784f-4d8d-b2e0-d7d30137d78a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-qc8gl_kube-system(b78a6970-784f-4d8d-b2e0-d7d30137d78a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b5237ea9ea01123449ad07f1fd7ddbbf350e62e405a013926d99e8e7d751a75c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-qc8gl" podUID="b78a6970-784f-4d8d-b2e0-d7d30137d78a" Jan 24 00:50:42.901213 containerd[1593]: time="2026-01-24T00:50:42.901194191Z" level=error msg="Failed to destroy network for sandbox \"acdd45db539cede390f7a70c51278b6ea7cb16030b06131b3248f2c2dad7c8b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:50:42.902995 containerd[1593]: time="2026-01-24T00:50:42.902682213Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57f46f995f-xhj6p,Uid:dfe7191c-2a43-4c85-8819-687d02e5b78b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"acdd45db539cede390f7a70c51278b6ea7cb16030b06131b3248f2c2dad7c8b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:50:42.903521 kubelet[2807]: E0124 00:50:42.903503 2807 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acdd45db539cede390f7a70c51278b6ea7cb16030b06131b3248f2c2dad7c8b9\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:50:42.903641 kubelet[2807]: E0124 00:50:42.903622 2807 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acdd45db539cede390f7a70c51278b6ea7cb16030b06131b3248f2c2dad7c8b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57f46f995f-xhj6p" Jan 24 00:50:42.903728 kubelet[2807]: E0124 00:50:42.903707 2807 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acdd45db539cede390f7a70c51278b6ea7cb16030b06131b3248f2c2dad7c8b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57f46f995f-xhj6p" Jan 24 00:50:42.903951 kubelet[2807]: E0124 00:50:42.903840 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-57f46f995f-xhj6p_calico-system(dfe7191c-2a43-4c85-8819-687d02e5b78b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-57f46f995f-xhj6p_calico-system(dfe7191c-2a43-4c85-8819-687d02e5b78b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"acdd45db539cede390f7a70c51278b6ea7cb16030b06131b3248f2c2dad7c8b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-57f46f995f-xhj6p" podUID="dfe7191c-2a43-4c85-8819-687d02e5b78b" Jan 24 00:50:42.925479 containerd[1593]: time="2026-01-24T00:50:42.925440321Z" level=error msg="Failed to destroy network for sandbox \"446c10b7f0c815d5b56d2edab10ba384379b634b454387e28dc018fdcd527334\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:50:42.926979 containerd[1593]: time="2026-01-24T00:50:42.926945412Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-rczw4,Uid:ed711851-0b36-4791-909e-763bb4210e62,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"446c10b7f0c815d5b56d2edab10ba384379b634b454387e28dc018fdcd527334\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:50:42.927222 kubelet[2807]: E0124 00:50:42.927172 2807 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"446c10b7f0c815d5b56d2edab10ba384379b634b454387e28dc018fdcd527334\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:50:42.927287 kubelet[2807]: E0124 00:50:42.927218 2807 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"446c10b7f0c815d5b56d2edab10ba384379b634b454387e28dc018fdcd527334\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-rczw4" Jan 24 00:50:42.927287 kubelet[2807]: E0124 00:50:42.927238 2807 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"446c10b7f0c815d5b56d2edab10ba384379b634b454387e28dc018fdcd527334\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-rczw4" Jan 24 00:50:42.927344 kubelet[2807]: E0124 00:50:42.927308 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-rczw4_calico-system(ed711851-0b36-4791-909e-763bb4210e62)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-rczw4_calico-system(ed711851-0b36-4791-909e-763bb4210e62)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"446c10b7f0c815d5b56d2edab10ba384379b634b454387e28dc018fdcd527334\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-rczw4" podUID="ed711851-0b36-4791-909e-763bb4210e62" Jan 24 00:50:43.548285 systemd[1]: run-netns-cni\x2db948a92b\x2dbd79\x2d52dc\x2d1a80\x2dfa961f390899.mount: Deactivated successfully. Jan 24 00:50:43.548551 systemd[1]: run-netns-cni\x2d3e647512\x2df19e\x2dc246\x2d9fe6\x2d9b59a7cf7926.mount: Deactivated successfully. Jan 24 00:50:43.548738 systemd[1]: run-netns-cni\x2d33cb9ce8\x2d3c20\x2d493d\x2d03a3\x2d27364b0e87a1.mount: Deactivated successfully. Jan 24 00:50:43.548924 systemd[1]: run-netns-cni\x2db3f804ed\x2d66d1\x2d4a90\x2d8009\x2dd5a9ccb8d73e.mount: Deactivated successfully. Jan 24 00:50:46.146355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount270371159.mount: Deactivated successfully. 
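All of the sandbox failures above share one root cause: the Calico CNI plugin stats /var/lib/calico/nodename, a file that the calico-node agent writes once it is running, and refuses to add or delete pod networks until that file exists. Below is a minimal sketch of that check; it illustrates the condition behind the logged error and is not the actual projectcalico plugin code (the file path and hint text are taken from the log itself).

```go
// Sketch of the condition behind the repeated CNI failures above: the plugin
// stats /var/lib/calico/nodename, which calico-node writes on startup, and
// bails out with a hint when the file is missing. Not the real plugin code.
package main

import (
	"fmt"
	"os"
)

const nodenameFile = "/var/lib/calico/nodename"

// nodename returns the node name recorded by calico-node, or an error in the
// same shape as the one kubelet keeps logging while the file is absent.
func nodename() (string, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return string(data), nil
}

func main() {
	name, err := nodename()
	if err != nil {
		// Until calico-node is up, every CNI add/delete ends up here, which is
		// why each pod sandbox above fails with CreatePodSandboxError.
		fmt.Println("CNI add would fail:", err)
		return
	}
	fmt.Println("node name:", name)
}
```

Once calico-node comes up and writes the file, sandbox creation starts to succeed, which is the transition visible a few seconds later in this log once the calico/node image finishes pulling and the container starts.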
Jan 24 00:50:46.175267 containerd[1593]: time="2026-01-24T00:50:46.175220765Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:46.176072 containerd[1593]: time="2026-01-24T00:50:46.176023797Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 24 00:50:46.176808 containerd[1593]: time="2026-01-24T00:50:46.176750778Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:46.178261 containerd[1593]: time="2026-01-24T00:50:46.178225220Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:50:46.178755 containerd[1593]: time="2026-01-24T00:50:46.178638940Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 3.36328835s" Jan 24 00:50:46.178755 containerd[1593]: time="2026-01-24T00:50:46.178662581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 24 00:50:46.197690 containerd[1593]: time="2026-01-24T00:50:46.197645527Z" level=info msg="CreateContainer within sandbox \"b990dcc1a94ea7d53562a276e3c44c19667bfb2f3972b6ae2afe4bd74da7e0f1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 24 00:50:46.207385 containerd[1593]: time="2026-01-24T00:50:46.207358031Z" level=info msg="Container 11bb10d703849ef444b2b5dcba1839da7bdbf88a18478565f8bf7f4b7bd444cc: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:50:46.212066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2526067394.mount: Deactivated successfully. Jan 24 00:50:46.217481 containerd[1593]: time="2026-01-24T00:50:46.217450915Z" level=info msg="CreateContainer within sandbox \"b990dcc1a94ea7d53562a276e3c44c19667bfb2f3972b6ae2afe4bd74da7e0f1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"11bb10d703849ef444b2b5dcba1839da7bdbf88a18478565f8bf7f4b7bd444cc\"" Jan 24 00:50:46.219162 containerd[1593]: time="2026-01-24T00:50:46.218151275Z" level=info msg="StartContainer for \"11bb10d703849ef444b2b5dcba1839da7bdbf88a18478565f8bf7f4b7bd444cc\"" Jan 24 00:50:46.220377 containerd[1593]: time="2026-01-24T00:50:46.220348508Z" level=info msg="connecting to shim 11bb10d703849ef444b2b5dcba1839da7bdbf88a18478565f8bf7f4b7bd444cc" address="unix:///run/containerd/s/acb5c19b390f7efdabd1364724fb0917a20607493b36d988a625a9c615735e15" protocol=ttrpc version=3 Jan 24 00:50:46.267968 systemd[1]: Started cri-containerd-11bb10d703849ef444b2b5dcba1839da7bdbf88a18478565f8bf7f4b7bd444cc.scope - libcontainer container 11bb10d703849ef444b2b5dcba1839da7bdbf88a18478565f8bf7f4b7bd444cc. 
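The PullImage/ImageCreate/CreateContainer sequence above is the CRI plugin driving containerd. Purely for orientation, the pull roughly corresponds to the following use of containerd's public Go client (containerd 1.x import paths assumed; the kubelet/CRI code path is more involved than this standalone sketch).

```go
// Standalone sketch of what the PullImage entries above amount to at the
// containerd client level. The CRI plugin drives this internally; this only
// shows the public Go client doing an equivalent pull in the k8s.io namespace.
package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed images live in the "k8s.io" containerd namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.30.4", containerd.WithPullUnpack)
	if err != nil {
		// A tag missing on the registry surfaces here as NotFound, which is
		// exactly what happens to the whisker images later in this log.
		log.Fatal(err)
	}

	size, err := img.Size(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pulled %s (%d bytes)\n", img.Name(), size)
}
```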
Jan 24 00:50:46.330000 audit: BPF prog-id=184 op=LOAD Jan 24 00:50:46.336819 kernel: audit: type=1334 audit(1769215846.330:590): prog-id=184 op=LOAD Jan 24 00:50:46.336882 kernel: audit: type=1300 audit(1769215846.330:590): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3308 pid=3817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:46.330000 audit[3817]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3308 pid=3817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:46.350510 kernel: audit: type=1327 audit(1769215846.330:590): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131626231306437303338343965663434346232623564636261313833 Jan 24 00:50:46.330000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131626231306437303338343965663434346232623564636261313833 Jan 24 00:50:46.352744 kernel: audit: type=1334 audit(1769215846.331:591): prog-id=185 op=LOAD Jan 24 00:50:46.331000 audit: BPF prog-id=185 op=LOAD Jan 24 00:50:46.331000 audit[3817]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3308 pid=3817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:46.360884 kernel: audit: type=1300 audit(1769215846.331:591): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3308 pid=3817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:46.331000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131626231306437303338343965663434346232623564636261313833 Jan 24 00:50:46.372174 kernel: audit: type=1327 audit(1769215846.331:591): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131626231306437303338343965663434346232623564636261313833 Jan 24 00:50:46.372231 kernel: audit: type=1334 audit(1769215846.331:592): prog-id=185 op=UNLOAD Jan 24 00:50:46.331000 audit: BPF prog-id=185 op=UNLOAD Jan 24 00:50:46.331000 audit[3817]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3308 pid=3817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:46.374560 kernel: audit: type=1300 audit(1769215846.331:592): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 
items=0 ppid=3308 pid=3817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:46.385219 kernel: audit: type=1327 audit(1769215846.331:592): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131626231306437303338343965663434346232623564636261313833 Jan 24 00:50:46.331000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131626231306437303338343965663434346232623564636261313833 Jan 24 00:50:46.331000 audit: BPF prog-id=184 op=UNLOAD Jan 24 00:50:46.331000 audit[3817]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3308 pid=3817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:46.331000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131626231306437303338343965663434346232623564636261313833 Jan 24 00:50:46.331000 audit: BPF prog-id=186 op=LOAD Jan 24 00:50:46.331000 audit[3817]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3308 pid=3817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:46.331000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131626231306437303338343965663434346232623564636261313833 Jan 24 00:50:46.396280 containerd[1593]: time="2026-01-24T00:50:46.396227781Z" level=info msg="StartContainer for \"11bb10d703849ef444b2b5dcba1839da7bdbf88a18478565f8bf7f4b7bd444cc\" returns successfully" Jan 24 00:50:46.485405 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 24 00:50:46.485937 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
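The WireGuard module load lands right after calico-node starts, presumably because calico-node probes for WireGuard support during startup (it is used for in-cluster encryption when enabled), so the module being pulled in here is expected even with no tunnel configured. As a side illustration only, and not something calico-node itself runs, WireGuard devices can be inspected from Go with wgctrl:

```go
// Illustration only (not something calico-node runs): list WireGuard devices
// from Go with wgctrl. When Calico's WireGuard encryption is enabled, a
// wireguard.cali interface would show up here; right after module load there
// is typically nothing configured yet.
package main

import (
	"fmt"
	"log"

	"golang.zx2c4.com/wireguard/wgctrl"
)

func main() {
	client, err := wgctrl.New()
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	devices, err := client.Devices()
	if err != nil {
		log.Fatal(err)
	}
	if len(devices) == 0 {
		fmt.Println("wireguard module loaded, no devices configured")
		return
	}
	for _, d := range devices {
		fmt.Printf("%s: %d peer(s)\n", d.Name, len(d.Peers))
	}
}
```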
Jan 24 00:50:46.696117 kubelet[2807]: I0124 00:50:46.695465 2807 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wq2q\" (UniqueName: \"kubernetes.io/projected/dfe7191c-2a43-4c85-8819-687d02e5b78b-kube-api-access-5wq2q\") pod \"dfe7191c-2a43-4c85-8819-687d02e5b78b\" (UID: \"dfe7191c-2a43-4c85-8819-687d02e5b78b\") " Jan 24 00:50:46.696117 kubelet[2807]: I0124 00:50:46.695505 2807 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dfe7191c-2a43-4c85-8819-687d02e5b78b-whisker-backend-key-pair\") pod \"dfe7191c-2a43-4c85-8819-687d02e5b78b\" (UID: \"dfe7191c-2a43-4c85-8819-687d02e5b78b\") " Jan 24 00:50:46.696117 kubelet[2807]: I0124 00:50:46.695527 2807 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfe7191c-2a43-4c85-8819-687d02e5b78b-whisker-ca-bundle\") pod \"dfe7191c-2a43-4c85-8819-687d02e5b78b\" (UID: \"dfe7191c-2a43-4c85-8819-687d02e5b78b\") " Jan 24 00:50:46.699200 kubelet[2807]: I0124 00:50:46.699079 2807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfe7191c-2a43-4c85-8819-687d02e5b78b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "dfe7191c-2a43-4c85-8819-687d02e5b78b" (UID: "dfe7191c-2a43-4c85-8819-687d02e5b78b"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 24 00:50:46.705785 kubelet[2807]: I0124 00:50:46.705738 2807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfe7191c-2a43-4c85-8819-687d02e5b78b-kube-api-access-5wq2q" (OuterVolumeSpecName: "kube-api-access-5wq2q") pod "dfe7191c-2a43-4c85-8819-687d02e5b78b" (UID: "dfe7191c-2a43-4c85-8819-687d02e5b78b"). InnerVolumeSpecName "kube-api-access-5wq2q". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 00:50:46.708235 kubelet[2807]: I0124 00:50:46.708178 2807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfe7191c-2a43-4c85-8819-687d02e5b78b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "dfe7191c-2a43-4c85-8819-687d02e5b78b" (UID: "dfe7191c-2a43-4c85-8819-687d02e5b78b"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 24 00:50:46.796573 kubelet[2807]: I0124 00:50:46.796527 2807 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5wq2q\" (UniqueName: \"kubernetes.io/projected/dfe7191c-2a43-4c85-8819-687d02e5b78b-kube-api-access-5wq2q\") on node \"172-239-50-209\" DevicePath \"\"" Jan 24 00:50:46.796573 kubelet[2807]: I0124 00:50:46.796567 2807 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dfe7191c-2a43-4c85-8819-687d02e5b78b-whisker-backend-key-pair\") on node \"172-239-50-209\" DevicePath \"\"" Jan 24 00:50:46.796573 kubelet[2807]: I0124 00:50:46.796577 2807 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfe7191c-2a43-4c85-8819-687d02e5b78b-whisker-ca-bundle\") on node \"172-239-50-209\" DevicePath \"\"" Jan 24 00:50:46.819886 kubelet[2807]: E0124 00:50:46.819820 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:46.825976 systemd[1]: Removed slice kubepods-besteffort-poddfe7191c_2a43_4c85_8819_687d02e5b78b.slice - libcontainer container kubepods-besteffort-poddfe7191c_2a43_4c85_8819_687d02e5b78b.slice. Jan 24 00:50:46.839874 kubelet[2807]: I0124 00:50:46.839726 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rvvw8" podStartSLOduration=1.669249083 podStartE2EDuration="10.839711943s" podCreationTimestamp="2026-01-24 00:50:36 +0000 UTC" firstStartedPulling="2026-01-24 00:50:37.008810821 +0000 UTC m=+18.409522637" lastFinishedPulling="2026-01-24 00:50:46.179273681 +0000 UTC m=+27.579985497" observedRunningTime="2026-01-24 00:50:46.838229652 +0000 UTC m=+28.238941468" watchObservedRunningTime="2026-01-24 00:50:46.839711943 +0000 UTC m=+28.240423759" Jan 24 00:50:46.889495 systemd[1]: Created slice kubepods-besteffort-pod6b257eba_5f3e_4c58_93ff_911ee6aa75db.slice - libcontainer container kubepods-besteffort-pod6b257eba_5f3e_4c58_93ff_911ee6aa75db.slice. 
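The pod_startup_latency_tracker entry above for calico-node-rvvw8 is self-consistent: podStartSLOduration is the end-to-end startup time minus the time spent pulling images. Recomputing it from the timestamps in that same entry (values copied verbatim from the log; only the subtraction is added):

```go
// Recompute the kubelet startup-latency numbers in the entry above from its
// own timestamps. podStartSLOduration is the end-to-end time minus image pulls.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Go's parser accepts the fractional seconds even though the layout omits them.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2026-01-24 00:50:36 +0000 UTC")                // podCreationTimestamp
	firstPull := parse("2026-01-24 00:50:37.008810821 +0000 UTC")    // firstStartedPulling
	lastPull := parse("2026-01-24 00:50:46.179273681 +0000 UTC")     // lastFinishedPulling
	watchRunning := parse("2026-01-24 00:50:46.839711943 +0000 UTC") // watchObservedRunningTime

	e2e := watchRunning.Sub(created)   // 10.839711943s = podStartE2EDuration
	pulling := lastPull.Sub(firstPull) // 9.17046286s spent pulling calico/node
	slo := e2e - pulling               // 1.669249083s = podStartSLOduration

	fmt.Println("E2E:", e2e)
	fmt.Println("pulling:", pulling)
	fmt.Println("SLO:", slo)
}
```

Of the roughly 10.84s between pod creation and the watch observing it running, about 9.17s went to pulling the ~157 MB calico/node image, leaving the ~1.67s reported as the SLO-relevant startup duration.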
Jan 24 00:50:46.998247 kubelet[2807]: I0124 00:50:46.997955 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6b257eba-5f3e-4c58-93ff-911ee6aa75db-whisker-backend-key-pair\") pod \"whisker-57bd667b79-zp7m5\" (UID: \"6b257eba-5f3e-4c58-93ff-911ee6aa75db\") " pod="calico-system/whisker-57bd667b79-zp7m5" Jan 24 00:50:46.998247 kubelet[2807]: I0124 00:50:46.998004 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b257eba-5f3e-4c58-93ff-911ee6aa75db-whisker-ca-bundle\") pod \"whisker-57bd667b79-zp7m5\" (UID: \"6b257eba-5f3e-4c58-93ff-911ee6aa75db\") " pod="calico-system/whisker-57bd667b79-zp7m5" Jan 24 00:50:46.998247 kubelet[2807]: I0124 00:50:46.998077 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz282\" (UniqueName: \"kubernetes.io/projected/6b257eba-5f3e-4c58-93ff-911ee6aa75db-kube-api-access-rz282\") pod \"whisker-57bd667b79-zp7m5\" (UID: \"6b257eba-5f3e-4c58-93ff-911ee6aa75db\") " pod="calico-system/whisker-57bd667b79-zp7m5" Jan 24 00:50:47.149630 systemd[1]: var-lib-kubelet-pods-dfe7191c\x2d2a43\x2d4c85\x2d8819\x2d687d02e5b78b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5wq2q.mount: Deactivated successfully. Jan 24 00:50:47.150173 systemd[1]: var-lib-kubelet-pods-dfe7191c\x2d2a43\x2d4c85\x2d8819\x2d687d02e5b78b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 24 00:50:47.196982 containerd[1593]: time="2026-01-24T00:50:47.196895286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57bd667b79-zp7m5,Uid:6b257eba-5f3e-4c58-93ff-911ee6aa75db,Namespace:calico-system,Attempt:0,}" Jan 24 00:50:47.339678 systemd-networkd[1499]: calib0adeee4a39: Link UP Jan 24 00:50:47.340645 systemd-networkd[1499]: calib0adeee4a39: Gained carrier Jan 24 00:50:47.357859 containerd[1593]: 2026-01-24 00:50:47.222 [INFO][3883] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:50:47.357859 containerd[1593]: 2026-01-24 00:50:47.264 [INFO][3883] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--50--209-k8s-whisker--57bd667b79--zp7m5-eth0 whisker-57bd667b79- calico-system 6b257eba-5f3e-4c58-93ff-911ee6aa75db 886 0 2026-01-24 00:50:46 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:57bd667b79 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-239-50-209 whisker-57bd667b79-zp7m5 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calib0adeee4a39 [] [] }} ContainerID="b7509223b3b57d79fab23b800645d13dfb505c7247842130261b66f259a48ad9" Namespace="calico-system" Pod="whisker-57bd667b79-zp7m5" WorkloadEndpoint="172--239--50--209-k8s-whisker--57bd667b79--zp7m5-" Jan 24 00:50:47.357859 containerd[1593]: 2026-01-24 00:50:47.265 [INFO][3883] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b7509223b3b57d79fab23b800645d13dfb505c7247842130261b66f259a48ad9" Namespace="calico-system" Pod="whisker-57bd667b79-zp7m5" WorkloadEndpoint="172--239--50--209-k8s-whisker--57bd667b79--zp7m5-eth0" Jan 24 00:50:47.357859 containerd[1593]: 2026-01-24 00:50:47.292 [INFO][3893] ipam/ipam_plugin.go 227: Calico CNI IPAM request 
count IPv4=1 IPv6=0 ContainerID="b7509223b3b57d79fab23b800645d13dfb505c7247842130261b66f259a48ad9" HandleID="k8s-pod-network.b7509223b3b57d79fab23b800645d13dfb505c7247842130261b66f259a48ad9" Workload="172--239--50--209-k8s-whisker--57bd667b79--zp7m5-eth0" Jan 24 00:50:47.358066 containerd[1593]: 2026-01-24 00:50:47.292 [INFO][3893] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b7509223b3b57d79fab23b800645d13dfb505c7247842130261b66f259a48ad9" HandleID="k8s-pod-network.b7509223b3b57d79fab23b800645d13dfb505c7247842130261b66f259a48ad9" Workload="172--239--50--209-k8s-whisker--57bd667b79--zp7m5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5bf0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-50-209", "pod":"whisker-57bd667b79-zp7m5", "timestamp":"2026-01-24 00:50:47.292613333 +0000 UTC"}, Hostname:"172-239-50-209", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:50:47.358066 containerd[1593]: 2026-01-24 00:50:47.292 [INFO][3893] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:50:47.358066 containerd[1593]: 2026-01-24 00:50:47.293 [INFO][3893] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:50:47.358066 containerd[1593]: 2026-01-24 00:50:47.293 [INFO][3893] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-50-209' Jan 24 00:50:47.358066 containerd[1593]: 2026-01-24 00:50:47.300 [INFO][3893] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b7509223b3b57d79fab23b800645d13dfb505c7247842130261b66f259a48ad9" host="172-239-50-209" Jan 24 00:50:47.358066 containerd[1593]: 2026-01-24 00:50:47.304 [INFO][3893] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-50-209" Jan 24 00:50:47.358066 containerd[1593]: 2026-01-24 00:50:47.307 [INFO][3893] ipam/ipam.go 511: Trying affinity for 192.168.113.0/26 host="172-239-50-209" Jan 24 00:50:47.358066 containerd[1593]: 2026-01-24 00:50:47.309 [INFO][3893] ipam/ipam.go 158: Attempting to load block cidr=192.168.113.0/26 host="172-239-50-209" Jan 24 00:50:47.358066 containerd[1593]: 2026-01-24 00:50:47.311 [INFO][3893] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.113.0/26 host="172-239-50-209" Jan 24 00:50:47.358066 containerd[1593]: 2026-01-24 00:50:47.311 [INFO][3893] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.113.0/26 handle="k8s-pod-network.b7509223b3b57d79fab23b800645d13dfb505c7247842130261b66f259a48ad9" host="172-239-50-209" Jan 24 00:50:47.358285 containerd[1593]: 2026-01-24 00:50:47.313 [INFO][3893] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b7509223b3b57d79fab23b800645d13dfb505c7247842130261b66f259a48ad9 Jan 24 00:50:47.358285 containerd[1593]: 2026-01-24 00:50:47.316 [INFO][3893] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.113.0/26 handle="k8s-pod-network.b7509223b3b57d79fab23b800645d13dfb505c7247842130261b66f259a48ad9" host="172-239-50-209" Jan 24 00:50:47.358285 containerd[1593]: 2026-01-24 00:50:47.320 [INFO][3893] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.113.1/26] block=192.168.113.0/26 handle="k8s-pod-network.b7509223b3b57d79fab23b800645d13dfb505c7247842130261b66f259a48ad9" host="172-239-50-209" Jan 24 00:50:47.358285 containerd[1593]: 2026-01-24 00:50:47.320 [INFO][3893] ipam/ipam.go 878: 
Auto-assigned 1 out of 1 IPv4s: [192.168.113.1/26] handle="k8s-pod-network.b7509223b3b57d79fab23b800645d13dfb505c7247842130261b66f259a48ad9" host="172-239-50-209" Jan 24 00:50:47.358285 containerd[1593]: 2026-01-24 00:50:47.320 [INFO][3893] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:50:47.358285 containerd[1593]: 2026-01-24 00:50:47.320 [INFO][3893] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.113.1/26] IPv6=[] ContainerID="b7509223b3b57d79fab23b800645d13dfb505c7247842130261b66f259a48ad9" HandleID="k8s-pod-network.b7509223b3b57d79fab23b800645d13dfb505c7247842130261b66f259a48ad9" Workload="172--239--50--209-k8s-whisker--57bd667b79--zp7m5-eth0" Jan 24 00:50:47.358402 containerd[1593]: 2026-01-24 00:50:47.326 [INFO][3883] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b7509223b3b57d79fab23b800645d13dfb505c7247842130261b66f259a48ad9" Namespace="calico-system" Pod="whisker-57bd667b79-zp7m5" WorkloadEndpoint="172--239--50--209-k8s-whisker--57bd667b79--zp7m5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--50--209-k8s-whisker--57bd667b79--zp7m5-eth0", GenerateName:"whisker-57bd667b79-", Namespace:"calico-system", SelfLink:"", UID:"6b257eba-5f3e-4c58-93ff-911ee6aa75db", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 50, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"57bd667b79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-50-209", ContainerID:"", Pod:"whisker-57bd667b79-zp7m5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.113.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib0adeee4a39", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:50:47.358402 containerd[1593]: 2026-01-24 00:50:47.327 [INFO][3883] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.1/32] ContainerID="b7509223b3b57d79fab23b800645d13dfb505c7247842130261b66f259a48ad9" Namespace="calico-system" Pod="whisker-57bd667b79-zp7m5" WorkloadEndpoint="172--239--50--209-k8s-whisker--57bd667b79--zp7m5-eth0" Jan 24 00:50:47.358473 containerd[1593]: 2026-01-24 00:50:47.328 [INFO][3883] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib0adeee4a39 ContainerID="b7509223b3b57d79fab23b800645d13dfb505c7247842130261b66f259a48ad9" Namespace="calico-system" Pod="whisker-57bd667b79-zp7m5" WorkloadEndpoint="172--239--50--209-k8s-whisker--57bd667b79--zp7m5-eth0" Jan 24 00:50:47.358473 containerd[1593]: 2026-01-24 00:50:47.341 [INFO][3883] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b7509223b3b57d79fab23b800645d13dfb505c7247842130261b66f259a48ad9" Namespace="calico-system" Pod="whisker-57bd667b79-zp7m5" WorkloadEndpoint="172--239--50--209-k8s-whisker--57bd667b79--zp7m5-eth0" Jan 24 00:50:47.358518 containerd[1593]: 2026-01-24 
00:50:47.341 [INFO][3883] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b7509223b3b57d79fab23b800645d13dfb505c7247842130261b66f259a48ad9" Namespace="calico-system" Pod="whisker-57bd667b79-zp7m5" WorkloadEndpoint="172--239--50--209-k8s-whisker--57bd667b79--zp7m5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--50--209-k8s-whisker--57bd667b79--zp7m5-eth0", GenerateName:"whisker-57bd667b79-", Namespace:"calico-system", SelfLink:"", UID:"6b257eba-5f3e-4c58-93ff-911ee6aa75db", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 50, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"57bd667b79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-50-209", ContainerID:"b7509223b3b57d79fab23b800645d13dfb505c7247842130261b66f259a48ad9", Pod:"whisker-57bd667b79-zp7m5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.113.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib0adeee4a39", MAC:"7e:80:b5:12:80:f4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:50:47.358568 containerd[1593]: 2026-01-24 00:50:47.353 [INFO][3883] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b7509223b3b57d79fab23b800645d13dfb505c7247842130261b66f259a48ad9" Namespace="calico-system" Pod="whisker-57bd667b79-zp7m5" WorkloadEndpoint="172--239--50--209-k8s-whisker--57bd667b79--zp7m5-eth0" Jan 24 00:50:47.405717 containerd[1593]: time="2026-01-24T00:50:47.405652942Z" level=info msg="connecting to shim b7509223b3b57d79fab23b800645d13dfb505c7247842130261b66f259a48ad9" address="unix:///run/containerd/s/0df2c0342ee60ecf0efd2d68003a080d3e9028b690a5786910f73cceb7f53750" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:50:47.437949 systemd[1]: Started cri-containerd-b7509223b3b57d79fab23b800645d13dfb505c7247842130261b66f259a48ad9.scope - libcontainer container b7509223b3b57d79fab23b800645d13dfb505c7247842130261b66f259a48ad9. 
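The IPAM trace above shows this host taking an affinity for the block 192.168.113.0/26 and assigning its first address, 192.168.113.1, to the whisker pod (recorded as a /32 on the workload endpoint). A /26 block holds 64 addresses, which matches Calico's default IPv4 block size of 26; a quick sanity check with the standard library:

```go
// Sanity check of the IPAM numbers in the trace above using net/netip: the
// assigned address falls inside the host's affine /26 block, and a /26 block
// holds 64 addresses (Calico's default IPv4 blockSize of 26).
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.113.0/26")
	assigned := netip.MustParseAddr("192.168.113.1")

	fmt.Println(block.Contains(assigned)) // true
	fmt.Println(1 << (32 - block.Bits())) // 64 addresses per block
}
```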
Jan 24 00:50:47.452900 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 24 00:50:47.452976 kernel: audit: type=1334 audit(1769215847.449:595): prog-id=187 op=LOAD Jan 24 00:50:47.449000 audit: BPF prog-id=187 op=LOAD Jan 24 00:50:47.454897 kernel: audit: type=1334 audit(1769215847.450:596): prog-id=188 op=LOAD Jan 24 00:50:47.450000 audit: BPF prog-id=188 op=LOAD Jan 24 00:50:47.463759 kernel: audit: type=1300 audit(1769215847.450:596): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3916 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:47.450000 audit[3927]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3916 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:47.450000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237353039323233623362353764373966616232336238303036343564 Jan 24 00:50:47.473382 kernel: audit: type=1327 audit(1769215847.450:596): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237353039323233623362353764373966616232336238303036343564 Jan 24 00:50:47.473427 kernel: audit: type=1334 audit(1769215847.450:597): prog-id=188 op=UNLOAD Jan 24 00:50:47.450000 audit: BPF prog-id=188 op=UNLOAD Jan 24 00:50:47.477732 kernel: audit: type=1300 audit(1769215847.450:597): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3916 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:47.450000 audit[3927]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3916 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:47.488912 kernel: audit: type=1327 audit(1769215847.450:597): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237353039323233623362353764373966616232336238303036343564 Jan 24 00:50:47.450000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237353039323233623362353764373966616232336238303036343564 Jan 24 00:50:47.491169 kernel: audit: type=1334 audit(1769215847.450:598): prog-id=189 op=LOAD Jan 24 00:50:47.450000 audit: BPF prog-id=189 op=LOAD Jan 24 00:50:47.494356 kernel: audit: type=1300 audit(1769215847.450:598): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3916 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:47.450000 audit[3927]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3916 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:47.450000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237353039323233623362353764373966616232336238303036343564 Jan 24 00:50:47.500522 kernel: audit: type=1327 audit(1769215847.450:598): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237353039323233623362353764373966616232336238303036343564 Jan 24 00:50:47.502610 containerd[1593]: time="2026-01-24T00:50:47.502574801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57bd667b79-zp7m5,Uid:6b257eba-5f3e-4c58-93ff-911ee6aa75db,Namespace:calico-system,Attempt:0,} returns sandbox id \"b7509223b3b57d79fab23b800645d13dfb505c7247842130261b66f259a48ad9\"" Jan 24 00:50:47.504229 containerd[1593]: time="2026-01-24T00:50:47.503881742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:50:47.450000 audit: BPF prog-id=190 op=LOAD Jan 24 00:50:47.450000 audit[3927]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3916 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:47.450000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237353039323233623362353764373966616232336238303036343564 Jan 24 00:50:47.450000 audit: BPF prog-id=190 op=UNLOAD Jan 24 00:50:47.450000 audit[3927]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3916 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:47.450000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237353039323233623362353764373966616232336238303036343564 Jan 24 00:50:47.450000 audit: BPF prog-id=189 op=UNLOAD Jan 24 00:50:47.450000 audit[3927]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3916 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:47.450000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237353039323233623362353764373966616232336238303036343564 Jan 24 00:50:47.450000 audit: BPF prog-id=191 op=LOAD Jan 24 00:50:47.450000 audit[3927]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3916 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:47.450000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237353039323233623362353764373966616232336238303036343564 Jan 24 00:50:47.628776 containerd[1593]: time="2026-01-24T00:50:47.628009797Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:50:47.630183 containerd[1593]: time="2026-01-24T00:50:47.630138609Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:50:47.630183 containerd[1593]: time="2026-01-24T00:50:47.630162569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 24 00:50:47.630402 kubelet[2807]: E0124 00:50:47.630360 2807 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:50:47.630455 kubelet[2807]: E0124 00:50:47.630406 2807 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:50:47.630513 kubelet[2807]: E0124 00:50:47.630487 2807 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-57bd667b79-zp7m5_calico-system(6b257eba-5f3e-4c58-93ff-911ee6aa75db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:50:47.632318 containerd[1593]: time="2026-01-24T00:50:47.632288272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:50:47.760505 containerd[1593]: time="2026-01-24T00:50:47.760437733Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:50:47.761391 containerd[1593]: time="2026-01-24T00:50:47.761298994Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:50:47.761391 containerd[1593]: 
time="2026-01-24T00:50:47.761337094Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 24 00:50:47.761753 kubelet[2807]: E0124 00:50:47.761703 2807 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:50:47.762332 kubelet[2807]: E0124 00:50:47.762286 2807 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:50:47.762493 kubelet[2807]: E0124 00:50:47.762398 2807 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-57bd667b79-zp7m5_calico-system(6b257eba-5f3e-4c58-93ff-911ee6aa75db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:50:47.762493 kubelet[2807]: E0124 00:50:47.762446 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bd667b79-zp7m5" podUID="6b257eba-5f3e-4c58-93ff-911ee6aa75db" Jan 24 00:50:47.822603 kubelet[2807]: I0124 00:50:47.822551 2807 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:50:47.824097 kubelet[2807]: E0124 00:50:47.824071 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:47.826882 kubelet[2807]: E0124 00:50:47.826292 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bd667b79-zp7m5" podUID="6b257eba-5f3e-4c58-93ff-911ee6aa75db" Jan 24 00:50:47.893000 audit[4027]: NETFILTER_CFG table=filter:115 family=2 
entries=22 op=nft_register_rule pid=4027 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:50:47.893000 audit[4027]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc5ddb1820 a2=0 a3=7ffc5ddb180c items=0 ppid=2920 pid=4027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:47.893000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:50:47.896000 audit[4027]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=4027 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:50:47.896000 audit[4027]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc5ddb1820 a2=0 a3=0 items=0 ppid=2920 pid=4027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:47.896000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:50:48.700089 kubelet[2807]: I0124 00:50:48.700026 2807 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfe7191c-2a43-4c85-8819-687d02e5b78b" path="/var/lib/kubelet/pods/dfe7191c-2a43-4c85-8819-687d02e5b78b/volumes" Jan 24 00:50:48.829674 kubelet[2807]: E0124 00:50:48.828953 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bd667b79-zp7m5" podUID="6b257eba-5f3e-4c58-93ff-911ee6aa75db" Jan 24 00:50:49.165068 systemd-networkd[1499]: calib0adeee4a39: Gained IPv6LL Jan 24 00:50:50.342485 kubelet[2807]: I0124 00:50:50.342318 2807 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:50:50.342929 kubelet[2807]: E0124 00:50:50.342732 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:50.380000 audit[4094]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=4094 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:50:50.380000 audit[4094]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffead43aa30 a2=0 a3=7ffead43aa1c items=0 ppid=2920 pid=4094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:50.380000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:50:50.385000 audit[4094]: NETFILTER_CFG table=nat:118 family=2 entries=19 op=nft_register_chain pid=4094 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:50:50.385000 audit[4094]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffead43aa30 a2=0 a3=7ffead43aa1c items=0 ppid=2920 pid=4094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:50.385000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:50:50.830837 kubelet[2807]: E0124 00:50:50.830769 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:51.315000 audit: BPF prog-id=192 op=LOAD Jan 24 00:50:51.315000 audit[4120]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe4e9821f0 a2=98 a3=1fffffffffffffff items=0 ppid=4097 pid=4120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.315000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 00:50:51.315000 audit: BPF prog-id=192 op=UNLOAD Jan 24 00:50:51.315000 audit[4120]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe4e9821c0 a3=0 items=0 ppid=4097 pid=4120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.315000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 00:50:51.315000 audit: BPF prog-id=193 op=LOAD Jan 24 00:50:51.315000 audit[4120]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe4e9820d0 a2=94 a3=3 items=0 ppid=4097 pid=4120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.315000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 00:50:51.315000 audit: BPF prog-id=193 op=UNLOAD Jan 24 00:50:51.315000 audit[4120]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe4e9820d0 a2=94 a3=3 items=0 ppid=4097 pid=4120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.315000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 00:50:51.315000 audit: BPF prog-id=194 op=LOAD Jan 24 00:50:51.315000 audit[4120]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe4e982110 a2=94 a3=7ffe4e9822f0 items=0 ppid=4097 pid=4120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.315000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 00:50:51.315000 audit: BPF prog-id=194 op=UNLOAD Jan 24 00:50:51.315000 audit[4120]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe4e982110 a2=94 a3=7ffe4e9822f0 items=0 ppid=4097 pid=4120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.315000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 00:50:51.318000 audit: BPF prog-id=195 op=LOAD Jan 24 00:50:51.318000 audit[4121]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff68b3b600 a2=98 a3=3 items=0 ppid=4097 pid=4121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.318000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:50:51.318000 audit: BPF prog-id=195 op=UNLOAD Jan 24 00:50:51.318000 audit[4121]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff68b3b5d0 a3=0 items=0 ppid=4097 pid=4121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.318000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:50:51.319000 audit: BPF prog-id=196 op=LOAD Jan 24 00:50:51.319000 audit[4121]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff68b3b3f0 a2=94 a3=54428f items=0 ppid=4097 pid=4121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.319000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:50:51.319000 audit: BPF prog-id=196 op=UNLOAD Jan 24 00:50:51.319000 audit[4121]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff68b3b3f0 a2=94 a3=54428f items=0 ppid=4097 pid=4121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.319000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:50:51.319000 audit: BPF prog-id=197 op=LOAD Jan 24 00:50:51.319000 audit[4121]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff68b3b420 a2=94 a3=2 items=0 ppid=4097 pid=4121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.319000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:50:51.319000 audit: BPF prog-id=197 op=UNLOAD Jan 24 00:50:51.319000 audit[4121]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff68b3b420 a2=0 a3=2 items=0 ppid=4097 pid=4121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.319000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:50:51.585000 audit: BPF prog-id=198 op=LOAD Jan 24 00:50:51.585000 audit[4121]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff68b3b2e0 a2=94 a3=1 items=0 ppid=4097 pid=4121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.585000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:50:51.585000 audit: BPF prog-id=198 op=UNLOAD Jan 24 00:50:51.585000 audit[4121]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff68b3b2e0 a2=94 a3=1 items=0 ppid=4097 pid=4121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.585000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:50:51.594000 audit: BPF prog-id=199 op=LOAD Jan 24 00:50:51.594000 audit[4121]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff68b3b2d0 a2=94 a3=4 items=0 ppid=4097 pid=4121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.594000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:50:51.595000 audit: BPF prog-id=199 op=UNLOAD Jan 24 00:50:51.595000 audit[4121]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff68b3b2d0 a2=0 a3=4 items=0 ppid=4097 pid=4121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.595000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:50:51.595000 audit: BPF prog-id=200 op=LOAD Jan 24 00:50:51.595000 audit[4121]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff68b3b130 a2=94 a3=5 items=0 ppid=4097 pid=4121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.595000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:50:51.595000 audit: BPF prog-id=200 op=UNLOAD Jan 24 00:50:51.595000 audit[4121]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=6 a1=7fff68b3b130 a2=0 a3=5 items=0 ppid=4097 pid=4121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.595000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:50:51.595000 audit: BPF prog-id=201 op=LOAD Jan 24 00:50:51.595000 audit[4121]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff68b3b350 a2=94 a3=6 items=0 ppid=4097 pid=4121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.595000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:50:51.595000 audit: BPF prog-id=201 op=UNLOAD Jan 24 00:50:51.595000 audit[4121]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff68b3b350 a2=0 a3=6 items=0 ppid=4097 pid=4121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.595000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:50:51.595000 audit: BPF prog-id=202 op=LOAD Jan 24 00:50:51.595000 audit[4121]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff68b3ab00 a2=94 a3=88 items=0 ppid=4097 pid=4121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.595000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:50:51.595000 audit: BPF prog-id=203 op=LOAD Jan 24 00:50:51.595000 audit[4121]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff68b3a980 a2=94 a3=2 items=0 ppid=4097 pid=4121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.595000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:50:51.595000 audit: BPF prog-id=203 op=UNLOAD Jan 24 00:50:51.595000 audit[4121]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff68b3a9b0 a2=0 a3=7fff68b3aab0 items=0 ppid=4097 pid=4121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.595000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:50:51.596000 audit: BPF prog-id=202 op=UNLOAD Jan 24 00:50:51.596000 audit[4121]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=cac2d10 a2=0 a3=8d4243c3055e56d6 items=0 ppid=4097 pid=4121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.596000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:50:51.607000 audit: BPF prog-id=204 op=LOAD Jan 24 00:50:51.607000 audit[4152]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe99b660c0 a2=98 a3=1999999999999999 items=0 ppid=4097 pid=4152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.607000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 00:50:51.607000 audit: BPF prog-id=204 op=UNLOAD Jan 24 00:50:51.607000 audit[4152]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe99b66090 a3=0 items=0 ppid=4097 pid=4152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.607000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 00:50:51.607000 audit: BPF prog-id=205 op=LOAD Jan 24 00:50:51.607000 audit[4152]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe99b65fa0 a2=94 a3=ffff items=0 ppid=4097 pid=4152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.607000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 00:50:51.607000 audit: BPF prog-id=205 op=UNLOAD Jan 24 00:50:51.607000 audit[4152]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe99b65fa0 a2=94 a3=ffff items=0 ppid=4097 pid=4152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.607000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 00:50:51.607000 audit: BPF prog-id=206 op=LOAD Jan 24 00:50:51.607000 audit[4152]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe99b65fe0 a2=94 a3=7ffe99b661c0 items=0 ppid=4097 pid=4152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.607000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 00:50:51.607000 audit: BPF prog-id=206 op=UNLOAD Jan 24 00:50:51.607000 audit[4152]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe99b65fe0 a2=94 a3=7ffe99b661c0 items=0 ppid=4097 pid=4152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.607000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 00:50:51.681507 systemd-networkd[1499]: vxlan.calico: Link UP Jan 24 00:50:51.681517 systemd-networkd[1499]: vxlan.calico: Gained carrier Jan 24 00:50:51.709000 audit: BPF prog-id=207 op=LOAD Jan 24 00:50:51.709000 audit[4179]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe8cad5d90 a2=98 a3=0 items=0 ppid=4097 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.709000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:50:51.710000 audit: BPF prog-id=207 op=UNLOAD Jan 24 00:50:51.710000 audit[4179]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe8cad5d60 a3=0 items=0 ppid=4097 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.710000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:50:51.711000 audit: BPF prog-id=208 op=LOAD Jan 24 00:50:51.711000 audit[4179]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe8cad5ba0 a2=94 a3=54428f items=0 ppid=4097 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.711000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:50:51.711000 audit: BPF prog-id=208 op=UNLOAD Jan 24 00:50:51.711000 audit[4179]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe8cad5ba0 a2=94 a3=54428f items=0 ppid=4097 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.711000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:50:51.711000 audit: BPF prog-id=209 op=LOAD Jan 24 00:50:51.711000 audit[4179]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe8cad5bd0 a2=94 a3=2 items=0 ppid=4097 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.711000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:50:51.711000 audit: BPF prog-id=209 op=UNLOAD Jan 24 00:50:51.711000 audit[4179]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe8cad5bd0 a2=0 a3=2 items=0 ppid=4097 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.711000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:50:51.711000 audit: BPF prog-id=210 op=LOAD Jan 24 00:50:51.711000 audit[4179]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe8cad5980 a2=94 a3=4 items=0 ppid=4097 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.711000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:50:51.711000 audit: BPF prog-id=210 op=UNLOAD Jan 24 00:50:51.711000 audit[4179]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe8cad5980 a2=94 a3=4 items=0 ppid=4097 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.711000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:50:51.711000 audit: BPF prog-id=211 op=LOAD Jan 24 00:50:51.711000 audit[4179]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe8cad5a80 a2=94 a3=7ffe8cad5c00 items=0 ppid=4097 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.711000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:50:51.711000 audit: BPF prog-id=211 op=UNLOAD Jan 24 00:50:51.711000 audit[4179]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe8cad5a80 a2=0 a3=7ffe8cad5c00 items=0 ppid=4097 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.711000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:50:51.712000 audit: BPF prog-id=212 op=LOAD Jan 24 00:50:51.712000 audit[4179]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe8cad51b0 a2=94 a3=2 items=0 ppid=4097 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.712000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:50:51.712000 audit: BPF prog-id=212 op=UNLOAD Jan 24 00:50:51.712000 audit[4179]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe8cad51b0 a2=0 a3=2 items=0 ppid=4097 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.712000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:50:51.712000 audit: BPF prog-id=213 op=LOAD Jan 24 00:50:51.712000 audit[4179]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe8cad52b0 a2=94 a3=30 items=0 ppid=4097 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.712000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:50:51.724000 audit: BPF prog-id=214 op=LOAD Jan 24 00:50:51.724000 audit[4184]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd9a3b7cb0 a2=98 a3=0 items=0 ppid=4097 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.724000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:50:51.724000 audit: BPF prog-id=214 op=UNLOAD Jan 24 00:50:51.724000 audit[4184]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd9a3b7c80 a3=0 items=0 ppid=4097 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.724000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:50:51.724000 audit: BPF prog-id=215 op=LOAD Jan 24 00:50:51.724000 audit[4184]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd9a3b7aa0 a2=94 a3=54428f items=0 ppid=4097 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.724000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:50:51.724000 audit: BPF prog-id=215 op=UNLOAD Jan 24 00:50:51.724000 audit[4184]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd9a3b7aa0 a2=94 a3=54428f items=0 ppid=4097 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.724000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:50:51.724000 audit: BPF prog-id=216 op=LOAD Jan 24 00:50:51.724000 audit[4184]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd9a3b7ad0 a2=94 a3=2 items=0 ppid=4097 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.724000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:50:51.724000 audit: BPF prog-id=216 op=UNLOAD Jan 24 00:50:51.724000 audit[4184]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd9a3b7ad0 a2=0 a3=2 items=0 ppid=4097 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.724000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:50:51.906000 audit: BPF prog-id=217 op=LOAD Jan 24 00:50:51.906000 audit[4184]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd9a3b7990 a2=94 a3=1 items=0 ppid=4097 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.906000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:50:51.906000 audit: BPF prog-id=217 op=UNLOAD Jan 24 00:50:51.906000 audit[4184]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd9a3b7990 a2=94 a3=1 items=0 ppid=4097 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.906000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:50:51.915000 audit: BPF prog-id=218 op=LOAD Jan 24 00:50:51.915000 audit[4184]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd9a3b7980 a2=94 a3=4 items=0 ppid=4097 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.915000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:50:51.915000 audit: BPF prog-id=218 op=UNLOAD Jan 24 00:50:51.915000 audit[4184]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd9a3b7980 a2=0 a3=4 items=0 ppid=4097 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.915000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:50:51.915000 audit: BPF prog-id=219 op=LOAD Jan 24 00:50:51.915000 audit[4184]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd9a3b77e0 a2=94 a3=5 items=0 ppid=4097 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.915000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:50:51.915000 audit: BPF prog-id=219 op=UNLOAD Jan 24 00:50:51.915000 audit[4184]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd9a3b77e0 a2=0 a3=5 items=0 ppid=4097 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.915000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:50:51.916000 audit: BPF prog-id=220 op=LOAD Jan 24 00:50:51.916000 audit[4184]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd9a3b7a00 a2=94 a3=6 items=0 ppid=4097 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.916000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:50:51.916000 audit: BPF prog-id=220 op=UNLOAD Jan 24 00:50:51.916000 audit[4184]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd9a3b7a00 a2=0 a3=6 items=0 ppid=4097 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.916000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:50:51.916000 audit: BPF prog-id=221 op=LOAD Jan 24 00:50:51.916000 audit[4184]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd9a3b71b0 
a2=94 a3=88 items=0 ppid=4097 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.916000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:50:51.916000 audit: BPF prog-id=222 op=LOAD Jan 24 00:50:51.916000 audit[4184]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffd9a3b7030 a2=94 a3=2 items=0 ppid=4097 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.916000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:50:51.916000 audit: BPF prog-id=222 op=UNLOAD Jan 24 00:50:51.916000 audit[4184]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffd9a3b7060 a2=0 a3=7ffd9a3b7160 items=0 ppid=4097 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.916000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:50:51.917000 audit: BPF prog-id=221 op=UNLOAD Jan 24 00:50:51.917000 audit[4184]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=3ecf0d10 a2=0 a3=cafb741aa2d349e9 items=0 ppid=4097 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.917000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:50:51.928000 audit: BPF prog-id=213 op=UNLOAD Jan 24 00:50:51.928000 audit[4097]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c00091c400 a2=0 a3=0 items=0 ppid=3958 pid=4097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.928000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 24 00:50:51.981000 audit[4205]: NETFILTER_CFG table=raw:119 family=2 entries=21 op=nft_register_chain pid=4205 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:50:51.981000 audit[4205]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffd04cb4d70 a2=0 a3=7ffd04cb4d5c items=0 ppid=4097 pid=4205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:51.981000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:50:52.001000 
audit[4212]: NETFILTER_CFG table=nat:120 family=2 entries=15 op=nft_register_chain pid=4212 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:50:52.001000 audit[4212]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fff2b6d37c0 a2=0 a3=7fff2b6d37ac items=0 ppid=4097 pid=4212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:52.001000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:50:52.001000 audit[4213]: NETFILTER_CFG table=mangle:121 family=2 entries=16 op=nft_register_chain pid=4213 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:50:52.001000 audit[4213]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffce2719ef0 a2=0 a3=7ffce2719edc items=0 ppid=4097 pid=4213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:52.001000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:50:52.012000 audit[4214]: NETFILTER_CFG table=filter:122 family=2 entries=94 op=nft_register_chain pid=4214 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:50:52.012000 audit[4214]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffdfba15550 a2=0 a3=7ffdfba1553c items=0 ppid=4097 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:52.012000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:50:53.198212 systemd-networkd[1499]: vxlan.calico: Gained IPv6LL Jan 24 00:50:53.699993 containerd[1593]: time="2026-01-24T00:50:53.699935809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-588dc98d7b-7slqw,Uid:7daaf6bf-447d-4008-8095-834f38ae42be,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:50:53.879043 systemd-networkd[1499]: cali34314097d5a: Link UP Jan 24 00:50:53.879904 systemd-networkd[1499]: cali34314097d5a: Gained carrier Jan 24 00:50:53.897763 containerd[1593]: 2026-01-24 00:50:53.776 [INFO][4224] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--50--209-k8s-calico--apiserver--588dc98d7b--7slqw-eth0 calico-apiserver-588dc98d7b- calico-apiserver 7daaf6bf-447d-4008-8095-834f38ae42be 808 0 2026-01-24 00:50:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:588dc98d7b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-239-50-209 calico-apiserver-588dc98d7b-7slqw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali34314097d5a [] [] }} ContainerID="d95075d2c21115f7a4a47710a0423bd7fd9fd0fa1718d4c9d7e2a8c97e1f1367" Namespace="calico-apiserver" 
Pod="calico-apiserver-588dc98d7b-7slqw" WorkloadEndpoint="172--239--50--209-k8s-calico--apiserver--588dc98d7b--7slqw-" Jan 24 00:50:53.897763 containerd[1593]: 2026-01-24 00:50:53.776 [INFO][4224] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d95075d2c21115f7a4a47710a0423bd7fd9fd0fa1718d4c9d7e2a8c97e1f1367" Namespace="calico-apiserver" Pod="calico-apiserver-588dc98d7b-7slqw" WorkloadEndpoint="172--239--50--209-k8s-calico--apiserver--588dc98d7b--7slqw-eth0" Jan 24 00:50:53.897763 containerd[1593]: 2026-01-24 00:50:53.826 [INFO][4238] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d95075d2c21115f7a4a47710a0423bd7fd9fd0fa1718d4c9d7e2a8c97e1f1367" HandleID="k8s-pod-network.d95075d2c21115f7a4a47710a0423bd7fd9fd0fa1718d4c9d7e2a8c97e1f1367" Workload="172--239--50--209-k8s-calico--apiserver--588dc98d7b--7slqw-eth0" Jan 24 00:50:53.898291 containerd[1593]: 2026-01-24 00:50:53.829 [INFO][4238] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d95075d2c21115f7a4a47710a0423bd7fd9fd0fa1718d4c9d7e2a8c97e1f1367" HandleID="k8s-pod-network.d95075d2c21115f7a4a47710a0423bd7fd9fd0fa1718d4c9d7e2a8c97e1f1367" Workload="172--239--50--209-k8s-calico--apiserver--588dc98d7b--7slqw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f1c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-239-50-209", "pod":"calico-apiserver-588dc98d7b-7slqw", "timestamp":"2026-01-24 00:50:53.826448982 +0000 UTC"}, Hostname:"172-239-50-209", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:50:53.898291 containerd[1593]: 2026-01-24 00:50:53.829 [INFO][4238] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:50:53.898291 containerd[1593]: 2026-01-24 00:50:53.829 [INFO][4238] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:50:53.898291 containerd[1593]: 2026-01-24 00:50:53.829 [INFO][4238] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-50-209' Jan 24 00:50:53.898291 containerd[1593]: 2026-01-24 00:50:53.842 [INFO][4238] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d95075d2c21115f7a4a47710a0423bd7fd9fd0fa1718d4c9d7e2a8c97e1f1367" host="172-239-50-209" Jan 24 00:50:53.898291 containerd[1593]: 2026-01-24 00:50:53.847 [INFO][4238] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-50-209" Jan 24 00:50:53.898291 containerd[1593]: 2026-01-24 00:50:53.853 [INFO][4238] ipam/ipam.go 511: Trying affinity for 192.168.113.0/26 host="172-239-50-209" Jan 24 00:50:53.898291 containerd[1593]: 2026-01-24 00:50:53.855 [INFO][4238] ipam/ipam.go 158: Attempting to load block cidr=192.168.113.0/26 host="172-239-50-209" Jan 24 00:50:53.898291 containerd[1593]: 2026-01-24 00:50:53.857 [INFO][4238] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.113.0/26 host="172-239-50-209" Jan 24 00:50:53.898484 containerd[1593]: 2026-01-24 00:50:53.857 [INFO][4238] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.113.0/26 handle="k8s-pod-network.d95075d2c21115f7a4a47710a0423bd7fd9fd0fa1718d4c9d7e2a8c97e1f1367" host="172-239-50-209" Jan 24 00:50:53.898484 containerd[1593]: 2026-01-24 00:50:53.859 [INFO][4238] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d95075d2c21115f7a4a47710a0423bd7fd9fd0fa1718d4c9d7e2a8c97e1f1367 Jan 24 00:50:53.898484 containerd[1593]: 2026-01-24 00:50:53.863 [INFO][4238] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.113.0/26 handle="k8s-pod-network.d95075d2c21115f7a4a47710a0423bd7fd9fd0fa1718d4c9d7e2a8c97e1f1367" host="172-239-50-209" Jan 24 00:50:53.898484 containerd[1593]: 2026-01-24 00:50:53.868 [INFO][4238] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.113.2/26] block=192.168.113.0/26 handle="k8s-pod-network.d95075d2c21115f7a4a47710a0423bd7fd9fd0fa1718d4c9d7e2a8c97e1f1367" host="172-239-50-209" Jan 24 00:50:53.898484 containerd[1593]: 2026-01-24 00:50:53.868 [INFO][4238] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.113.2/26] handle="k8s-pod-network.d95075d2c21115f7a4a47710a0423bd7fd9fd0fa1718d4c9d7e2a8c97e1f1367" host="172-239-50-209" Jan 24 00:50:53.898484 containerd[1593]: 2026-01-24 00:50:53.868 [INFO][4238] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
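Annotation: the IPAM entries above show the Calico CNI plugin claiming 192.168.113.2 out of the block 192.168.113.0/26 that is affine to host 172-239-50-209. As a minimal sketch using only the values printed in the log (Python's standard ipaddress module, nothing Calico-specific), the containment and block size implied by a /26 affinity can be checked like this:

```python
import ipaddress

# Values taken from the IPAM records above.
block = ipaddress.ip_network("192.168.113.0/26")   # block affine to host 172-239-50-209
claimed = ipaddress.ip_address("192.168.113.2")    # address claimed for the new pod

print(claimed in block)      # True: the claimed IP falls inside the affine block
print(block.num_addresses)   # 64: a /26 block holds 64 addresses
```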
Jan 24 00:50:53.898484 containerd[1593]: 2026-01-24 00:50:53.868 [INFO][4238] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.113.2/26] IPv6=[] ContainerID="d95075d2c21115f7a4a47710a0423bd7fd9fd0fa1718d4c9d7e2a8c97e1f1367" HandleID="k8s-pod-network.d95075d2c21115f7a4a47710a0423bd7fd9fd0fa1718d4c9d7e2a8c97e1f1367" Workload="172--239--50--209-k8s-calico--apiserver--588dc98d7b--7slqw-eth0" Jan 24 00:50:53.898619 containerd[1593]: 2026-01-24 00:50:53.872 [INFO][4224] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d95075d2c21115f7a4a47710a0423bd7fd9fd0fa1718d4c9d7e2a8c97e1f1367" Namespace="calico-apiserver" Pod="calico-apiserver-588dc98d7b-7slqw" WorkloadEndpoint="172--239--50--209-k8s-calico--apiserver--588dc98d7b--7slqw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--50--209-k8s-calico--apiserver--588dc98d7b--7slqw-eth0", GenerateName:"calico-apiserver-588dc98d7b-", Namespace:"calico-apiserver", SelfLink:"", UID:"7daaf6bf-447d-4008-8095-834f38ae42be", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 50, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"588dc98d7b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-50-209", ContainerID:"", Pod:"calico-apiserver-588dc98d7b-7slqw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali34314097d5a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:50:53.898674 containerd[1593]: 2026-01-24 00:50:53.872 [INFO][4224] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.2/32] ContainerID="d95075d2c21115f7a4a47710a0423bd7fd9fd0fa1718d4c9d7e2a8c97e1f1367" Namespace="calico-apiserver" Pod="calico-apiserver-588dc98d7b-7slqw" WorkloadEndpoint="172--239--50--209-k8s-calico--apiserver--588dc98d7b--7slqw-eth0" Jan 24 00:50:53.898674 containerd[1593]: 2026-01-24 00:50:53.872 [INFO][4224] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali34314097d5a ContainerID="d95075d2c21115f7a4a47710a0423bd7fd9fd0fa1718d4c9d7e2a8c97e1f1367" Namespace="calico-apiserver" Pod="calico-apiserver-588dc98d7b-7slqw" WorkloadEndpoint="172--239--50--209-k8s-calico--apiserver--588dc98d7b--7slqw-eth0" Jan 24 00:50:53.898674 containerd[1593]: 2026-01-24 00:50:53.881 [INFO][4224] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d95075d2c21115f7a4a47710a0423bd7fd9fd0fa1718d4c9d7e2a8c97e1f1367" Namespace="calico-apiserver" Pod="calico-apiserver-588dc98d7b-7slqw" WorkloadEndpoint="172--239--50--209-k8s-calico--apiserver--588dc98d7b--7slqw-eth0" Jan 24 00:50:53.898735 containerd[1593]: 2026-01-24 00:50:53.881 [INFO][4224] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="d95075d2c21115f7a4a47710a0423bd7fd9fd0fa1718d4c9d7e2a8c97e1f1367" Namespace="calico-apiserver" Pod="calico-apiserver-588dc98d7b-7slqw" WorkloadEndpoint="172--239--50--209-k8s-calico--apiserver--588dc98d7b--7slqw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--50--209-k8s-calico--apiserver--588dc98d7b--7slqw-eth0", GenerateName:"calico-apiserver-588dc98d7b-", Namespace:"calico-apiserver", SelfLink:"", UID:"7daaf6bf-447d-4008-8095-834f38ae42be", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 50, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"588dc98d7b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-50-209", ContainerID:"d95075d2c21115f7a4a47710a0423bd7fd9fd0fa1718d4c9d7e2a8c97e1f1367", Pod:"calico-apiserver-588dc98d7b-7slqw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali34314097d5a", MAC:"96:41:50:22:ce:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:50:53.898788 containerd[1593]: 2026-01-24 00:50:53.892 [INFO][4224] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d95075d2c21115f7a4a47710a0423bd7fd9fd0fa1718d4c9d7e2a8c97e1f1367" Namespace="calico-apiserver" Pod="calico-apiserver-588dc98d7b-7slqw" WorkloadEndpoint="172--239--50--209-k8s-calico--apiserver--588dc98d7b--7slqw-eth0" Jan 24 00:50:53.922000 audit[4251]: NETFILTER_CFG table=filter:123 family=2 entries=50 op=nft_register_chain pid=4251 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:50:53.925815 kernel: kauditd_printk_skb: 222 callbacks suppressed Jan 24 00:50:53.925902 kernel: audit: type=1325 audit(1769215853.922:673): table=filter:123 family=2 entries=50 op=nft_register_chain pid=4251 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:50:53.922000 audit[4251]: SYSCALL arch=c000003e syscall=46 success=yes exit=28208 a0=3 a1=7ffffc68c6e0 a2=0 a3=7ffffc68c6cc items=0 ppid=4097 pid=4251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:53.942879 kernel: audit: type=1300 audit(1769215853.922:673): arch=c000003e syscall=46 success=yes exit=28208 a0=3 a1=7ffffc68c6e0 a2=0 a3=7ffffc68c6cc items=0 ppid=4097 pid=4251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:53.942979 kernel: audit: type=1327 audit(1769215853.922:673): 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:50:53.922000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:50:53.943842 containerd[1593]: time="2026-01-24T00:50:53.943457027Z" level=info msg="connecting to shim d95075d2c21115f7a4a47710a0423bd7fd9fd0fa1718d4c9d7e2a8c97e1f1367" address="unix:///run/containerd/s/075f9f6aa060d958726e0dfa1f3eec7d6d3857b8089e51c0b96bb8bc6e7eee91" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:50:53.993125 systemd[1]: Started cri-containerd-d95075d2c21115f7a4a47710a0423bd7fd9fd0fa1718d4c9d7e2a8c97e1f1367.scope - libcontainer container d95075d2c21115f7a4a47710a0423bd7fd9fd0fa1718d4c9d7e2a8c97e1f1367. Jan 24 00:50:54.014000 audit: BPF prog-id=223 op=LOAD Jan 24 00:50:54.019848 kernel: audit: type=1334 audit(1769215854.014:674): prog-id=223 op=LOAD Jan 24 00:50:54.019923 kernel: audit: type=1334 audit(1769215854.015:675): prog-id=224 op=LOAD Jan 24 00:50:54.015000 audit: BPF prog-id=224 op=LOAD Jan 24 00:50:54.015000 audit[4272]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=4261 pid=4272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:54.035871 kernel: audit: type=1300 audit(1769215854.015:675): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=4261 pid=4272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:54.015000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439353037356432633231313135663761346134373731306130343233 Jan 24 00:50:54.049191 kernel: audit: type=1327 audit(1769215854.015:675): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439353037356432633231313135663761346134373731306130343233 Jan 24 00:50:54.049269 kernel: audit: type=1334 audit(1769215854.015:676): prog-id=224 op=UNLOAD Jan 24 00:50:54.015000 audit: BPF prog-id=224 op=UNLOAD Jan 24 00:50:54.015000 audit[4272]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4261 pid=4272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:54.080638 kernel: audit: type=1300 audit(1769215854.015:676): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4261 pid=4272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:54.015000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439353037356432633231313135663761346134373731306130343233 Jan 24 00:50:54.094835 kernel: audit: type=1327 audit(1769215854.015:676): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439353037356432633231313135663761346134373731306130343233 Jan 24 00:50:54.015000 audit: BPF prog-id=225 op=LOAD Jan 24 00:50:54.015000 audit[4272]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4261 pid=4272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:54.015000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439353037356432633231313135663761346134373731306130343233 Jan 24 00:50:54.015000 audit: BPF prog-id=226 op=LOAD Jan 24 00:50:54.015000 audit[4272]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4261 pid=4272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:54.015000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439353037356432633231313135663761346134373731306130343233 Jan 24 00:50:54.015000 audit: BPF prog-id=226 op=UNLOAD Jan 24 00:50:54.015000 audit[4272]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4261 pid=4272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:54.015000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439353037356432633231313135663761346134373731306130343233 Jan 24 00:50:54.015000 audit: BPF prog-id=225 op=UNLOAD Jan 24 00:50:54.015000 audit[4272]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4261 pid=4272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:54.015000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439353037356432633231313135663761346134373731306130343233 Jan 24 00:50:54.015000 audit: BPF prog-id=227 op=LOAD Jan 24 00:50:54.015000 audit[4272]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=4261 pid=4272 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:54.015000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439353037356432633231313135663761346134373731306130343233 Jan 24 00:50:54.105308 containerd[1593]: time="2026-01-24T00:50:54.105222145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-588dc98d7b-7slqw,Uid:7daaf6bf-447d-4008-8095-834f38ae42be,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d95075d2c21115f7a4a47710a0423bd7fd9fd0fa1718d4c9d7e2a8c97e1f1367\"" Jan 24 00:50:54.108992 containerd[1593]: time="2026-01-24T00:50:54.108953909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:50:54.246295 containerd[1593]: time="2026-01-24T00:50:54.245714969Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:50:54.246916 containerd[1593]: time="2026-01-24T00:50:54.246887600Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:50:54.246972 containerd[1593]: time="2026-01-24T00:50:54.246961311Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:50:54.247145 kubelet[2807]: E0124 00:50:54.247112 2807 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:50:54.247683 kubelet[2807]: E0124 00:50:54.247153 2807 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:50:54.247683 kubelet[2807]: E0124 00:50:54.247223 2807 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-588dc98d7b-7slqw_calico-apiserver(7daaf6bf-447d-4008-8095-834f38ae42be): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:50:54.247683 kubelet[2807]: E0124 00:50:54.247252 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588dc98d7b-7slqw" podUID="7daaf6bf-447d-4008-8095-834f38ae42be" Jan 24 00:50:54.701747 kubelet[2807]: E0124 00:50:54.701703 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:54.706833 containerd[1593]: time="2026-01-24T00:50:54.702928937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qc8gl,Uid:b78a6970-784f-4d8d-b2e0-d7d30137d78a,Namespace:kube-system,Attempt:0,}" Jan 24 00:50:54.819954 systemd-networkd[1499]: cali402e55b24b1: Link UP Jan 24 00:50:54.821706 systemd-networkd[1499]: cali402e55b24b1: Gained carrier Jan 24 00:50:54.838437 containerd[1593]: 2026-01-24 00:50:54.747 [INFO][4299] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--50--209-k8s-coredns--66bc5c9577--qc8gl-eth0 coredns-66bc5c9577- kube-system b78a6970-784f-4d8d-b2e0-d7d30137d78a 812 0 2026-01-24 00:50:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-239-50-209 coredns-66bc5c9577-qc8gl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali402e55b24b1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="64b7801f95a49d936b24130e267c87dec387a6bfc5adef513e928ecc75a7c984" Namespace="kube-system" Pod="coredns-66bc5c9577-qc8gl" WorkloadEndpoint="172--239--50--209-k8s-coredns--66bc5c9577--qc8gl-" Jan 24 00:50:54.838437 containerd[1593]: 2026-01-24 00:50:54.747 [INFO][4299] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="64b7801f95a49d936b24130e267c87dec387a6bfc5adef513e928ecc75a7c984" Namespace="kube-system" Pod="coredns-66bc5c9577-qc8gl" WorkloadEndpoint="172--239--50--209-k8s-coredns--66bc5c9577--qc8gl-eth0" Jan 24 00:50:54.838437 containerd[1593]: 2026-01-24 00:50:54.780 [INFO][4314] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="64b7801f95a49d936b24130e267c87dec387a6bfc5adef513e928ecc75a7c984" HandleID="k8s-pod-network.64b7801f95a49d936b24130e267c87dec387a6bfc5adef513e928ecc75a7c984" Workload="172--239--50--209-k8s-coredns--66bc5c9577--qc8gl-eth0" Jan 24 00:50:54.838576 containerd[1593]: 2026-01-24 00:50:54.780 [INFO][4314] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="64b7801f95a49d936b24130e267c87dec387a6bfc5adef513e928ecc75a7c984" HandleID="k8s-pod-network.64b7801f95a49d936b24130e267c87dec387a6bfc5adef513e928ecc75a7c984" Workload="172--239--50--209-k8s-coredns--66bc5c9577--qc8gl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cfbd0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-239-50-209", "pod":"coredns-66bc5c9577-qc8gl", "timestamp":"2026-01-24 00:50:54.780364077 +0000 UTC"}, Hostname:"172-239-50-209", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:50:54.838576 containerd[1593]: 2026-01-24 00:50:54.780 [INFO][4314] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:50:54.838576 containerd[1593]: 2026-01-24 00:50:54.780 [INFO][4314] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:50:54.838576 containerd[1593]: 2026-01-24 00:50:54.780 [INFO][4314] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-50-209' Jan 24 00:50:54.838576 containerd[1593]: 2026-01-24 00:50:54.788 [INFO][4314] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.64b7801f95a49d936b24130e267c87dec387a6bfc5adef513e928ecc75a7c984" host="172-239-50-209" Jan 24 00:50:54.838576 containerd[1593]: 2026-01-24 00:50:54.793 [INFO][4314] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-50-209" Jan 24 00:50:54.838576 containerd[1593]: 2026-01-24 00:50:54.798 [INFO][4314] ipam/ipam.go 511: Trying affinity for 192.168.113.0/26 host="172-239-50-209" Jan 24 00:50:54.838576 containerd[1593]: 2026-01-24 00:50:54.800 [INFO][4314] ipam/ipam.go 158: Attempting to load block cidr=192.168.113.0/26 host="172-239-50-209" Jan 24 00:50:54.838576 containerd[1593]: 2026-01-24 00:50:54.802 [INFO][4314] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.113.0/26 host="172-239-50-209" Jan 24 00:50:54.838576 containerd[1593]: 2026-01-24 00:50:54.802 [INFO][4314] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.113.0/26 handle="k8s-pod-network.64b7801f95a49d936b24130e267c87dec387a6bfc5adef513e928ecc75a7c984" host="172-239-50-209" Jan 24 00:50:54.838789 containerd[1593]: 2026-01-24 00:50:54.803 [INFO][4314] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.64b7801f95a49d936b24130e267c87dec387a6bfc5adef513e928ecc75a7c984 Jan 24 00:50:54.838789 containerd[1593]: 2026-01-24 00:50:54.807 [INFO][4314] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.113.0/26 handle="k8s-pod-network.64b7801f95a49d936b24130e267c87dec387a6bfc5adef513e928ecc75a7c984" host="172-239-50-209" Jan 24 00:50:54.838789 containerd[1593]: 2026-01-24 00:50:54.811 [INFO][4314] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.113.3/26] block=192.168.113.0/26 handle="k8s-pod-network.64b7801f95a49d936b24130e267c87dec387a6bfc5adef513e928ecc75a7c984" host="172-239-50-209" Jan 24 00:50:54.838789 containerd[1593]: 2026-01-24 00:50:54.811 [INFO][4314] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.113.3/26] handle="k8s-pod-network.64b7801f95a49d936b24130e267c87dec387a6bfc5adef513e928ecc75a7c984" host="172-239-50-209" Jan 24 00:50:54.838789 containerd[1593]: 2026-01-24 00:50:54.811 [INFO][4314] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:50:54.838789 containerd[1593]: 2026-01-24 00:50:54.811 [INFO][4314] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.113.3/26] IPv6=[] ContainerID="64b7801f95a49d936b24130e267c87dec387a6bfc5adef513e928ecc75a7c984" HandleID="k8s-pod-network.64b7801f95a49d936b24130e267c87dec387a6bfc5adef513e928ecc75a7c984" Workload="172--239--50--209-k8s-coredns--66bc5c9577--qc8gl-eth0" Jan 24 00:50:54.838989 containerd[1593]: 2026-01-24 00:50:54.816 [INFO][4299] cni-plugin/k8s.go 418: Populated endpoint ContainerID="64b7801f95a49d936b24130e267c87dec387a6bfc5adef513e928ecc75a7c984" Namespace="kube-system" Pod="coredns-66bc5c9577-qc8gl" WorkloadEndpoint="172--239--50--209-k8s-coredns--66bc5c9577--qc8gl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--50--209-k8s-coredns--66bc5c9577--qc8gl-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b78a6970-784f-4d8d-b2e0-d7d30137d78a", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 50, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-50-209", ContainerID:"", Pod:"coredns-66bc5c9577-qc8gl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali402e55b24b1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:50:54.838989 containerd[1593]: 2026-01-24 00:50:54.816 [INFO][4299] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.3/32] ContainerID="64b7801f95a49d936b24130e267c87dec387a6bfc5adef513e928ecc75a7c984" Namespace="kube-system" Pod="coredns-66bc5c9577-qc8gl" WorkloadEndpoint="172--239--50--209-k8s-coredns--66bc5c9577--qc8gl-eth0" Jan 24 00:50:54.838989 containerd[1593]: 2026-01-24 00:50:54.816 [INFO][4299] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali402e55b24b1 ContainerID="64b7801f95a49d936b24130e267c87dec387a6bfc5adef513e928ecc75a7c984" Namespace="kube-system" Pod="coredns-66bc5c9577-qc8gl" WorkloadEndpoint="172--239--50--209-k8s-coredns--66bc5c9577--qc8gl-eth0" Jan 24 00:50:54.838989 
containerd[1593]: 2026-01-24 00:50:54.820 [INFO][4299] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="64b7801f95a49d936b24130e267c87dec387a6bfc5adef513e928ecc75a7c984" Namespace="kube-system" Pod="coredns-66bc5c9577-qc8gl" WorkloadEndpoint="172--239--50--209-k8s-coredns--66bc5c9577--qc8gl-eth0" Jan 24 00:50:54.838989 containerd[1593]: 2026-01-24 00:50:54.822 [INFO][4299] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="64b7801f95a49d936b24130e267c87dec387a6bfc5adef513e928ecc75a7c984" Namespace="kube-system" Pod="coredns-66bc5c9577-qc8gl" WorkloadEndpoint="172--239--50--209-k8s-coredns--66bc5c9577--qc8gl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--50--209-k8s-coredns--66bc5c9577--qc8gl-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b78a6970-784f-4d8d-b2e0-d7d30137d78a", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 50, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-50-209", ContainerID:"64b7801f95a49d936b24130e267c87dec387a6bfc5adef513e928ecc75a7c984", Pod:"coredns-66bc5c9577-qc8gl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali402e55b24b1", MAC:"de:7f:56:1a:a9:2b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:50:54.838989 containerd[1593]: 2026-01-24 00:50:54.832 [INFO][4299] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="64b7801f95a49d936b24130e267c87dec387a6bfc5adef513e928ecc75a7c984" Namespace="kube-system" Pod="coredns-66bc5c9577-qc8gl" WorkloadEndpoint="172--239--50--209-k8s-coredns--66bc5c9577--qc8gl-eth0" Jan 24 00:50:54.850202 kubelet[2807]: E0124 00:50:54.850152 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588dc98d7b-7slqw" podUID="7daaf6bf-447d-4008-8095-834f38ae42be" Jan 24 00:50:54.876295 containerd[1593]: time="2026-01-24T00:50:54.876168465Z" level=info msg="connecting to shim 64b7801f95a49d936b24130e267c87dec387a6bfc5adef513e928ecc75a7c984" address="unix:///run/containerd/s/ba5bb3f196d9034d7db6257877770eccebd1b5d96bb134ece6ed2a4f3aae0282" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:50:54.892000 audit[4346]: NETFILTER_CFG table=filter:124 family=2 entries=46 op=nft_register_chain pid=4346 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:50:54.892000 audit[4346]: SYSCALL arch=c000003e syscall=46 success=yes exit=23740 a0=3 a1=7ffcd0ffb430 a2=0 a3=7ffcd0ffb41c items=0 ppid=4097 pid=4346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:54.892000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:50:54.906000 audit[4363]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=4363 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:50:54.906000 audit[4363]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffef0be3710 a2=0 a3=7ffef0be36fc items=0 ppid=2920 pid=4363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:54.906000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:50:54.909000 audit[4363]: NETFILTER_CFG table=nat:126 family=2 entries=14 op=nft_register_rule pid=4363 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:50:54.909000 audit[4363]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffef0be3710 a2=0 a3=0 items=0 ppid=2920 pid=4363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:54.909000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:50:54.912148 systemd[1]: Started cri-containerd-64b7801f95a49d936b24130e267c87dec387a6bfc5adef513e928ecc75a7c984.scope - libcontainer container 64b7801f95a49d936b24130e267c87dec387a6bfc5adef513e928ecc75a7c984. 
Jan 24 00:50:54.927000 audit: BPF prog-id=228 op=LOAD Jan 24 00:50:54.929000 audit: BPF prog-id=229 op=LOAD Jan 24 00:50:54.929000 audit[4350]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4336 pid=4350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:54.929000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634623738303166393561343964393336623234313330653236376338 Jan 24 00:50:54.929000 audit: BPF prog-id=229 op=UNLOAD Jan 24 00:50:54.929000 audit[4350]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4336 pid=4350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:54.929000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634623738303166393561343964393336623234313330653236376338 Jan 24 00:50:54.929000 audit: BPF prog-id=230 op=LOAD Jan 24 00:50:54.929000 audit[4350]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4336 pid=4350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:54.929000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634623738303166393561343964393336623234313330653236376338 Jan 24 00:50:54.929000 audit: BPF prog-id=231 op=LOAD Jan 24 00:50:54.929000 audit[4350]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4336 pid=4350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:54.929000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634623738303166393561343964393336623234313330653236376338 Jan 24 00:50:54.929000 audit: BPF prog-id=231 op=UNLOAD Jan 24 00:50:54.929000 audit[4350]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4336 pid=4350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:54.929000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634623738303166393561343964393336623234313330653236376338 Jan 24 00:50:54.929000 audit: BPF prog-id=230 op=UNLOAD Jan 24 00:50:54.929000 audit[4350]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4336 pid=4350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:54.929000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634623738303166393561343964393336623234313330653236376338 Jan 24 00:50:54.930000 audit: BPF prog-id=232 op=LOAD Jan 24 00:50:54.930000 audit[4350]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4336 pid=4350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:54.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634623738303166393561343964393336623234313330653236376338 Jan 24 00:50:54.972267 containerd[1593]: time="2026-01-24T00:50:54.972129133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qc8gl,Uid:b78a6970-784f-4d8d-b2e0-d7d30137d78a,Namespace:kube-system,Attempt:0,} returns sandbox id \"64b7801f95a49d936b24130e267c87dec387a6bfc5adef513e928ecc75a7c984\"" Jan 24 00:50:54.974045 kubelet[2807]: E0124 00:50:54.974023 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:54.978424 containerd[1593]: time="2026-01-24T00:50:54.978384010Z" level=info msg="CreateContainer within sandbox \"64b7801f95a49d936b24130e267c87dec387a6bfc5adef513e928ecc75a7c984\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:50:54.991168 containerd[1593]: time="2026-01-24T00:50:54.990536002Z" level=info msg="Container bf0d982f25e27a1c6d0d139010725a8270134bad7af247835d1a6d33e920ec27: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:50:54.996943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2291787293.mount: Deactivated successfully. Jan 24 00:50:55.001687 containerd[1593]: time="2026-01-24T00:50:55.001655404Z" level=info msg="CreateContainer within sandbox \"64b7801f95a49d936b24130e267c87dec387a6bfc5adef513e928ecc75a7c984\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bf0d982f25e27a1c6d0d139010725a8270134bad7af247835d1a6d33e920ec27\"" Jan 24 00:50:55.002583 containerd[1593]: time="2026-01-24T00:50:55.002545724Z" level=info msg="StartContainer for \"bf0d982f25e27a1c6d0d139010725a8270134bad7af247835d1a6d33e920ec27\"" Jan 24 00:50:55.004615 containerd[1593]: time="2026-01-24T00:50:55.004580046Z" level=info msg="connecting to shim bf0d982f25e27a1c6d0d139010725a8270134bad7af247835d1a6d33e920ec27" address="unix:///run/containerd/s/ba5bb3f196d9034d7db6257877770eccebd1b5d96bb134ece6ed2a4f3aae0282" protocol=ttrpc version=3 Jan 24 00:50:55.032005 systemd[1]: Started cri-containerd-bf0d982f25e27a1c6d0d139010725a8270134bad7af247835d1a6d33e920ec27.scope - libcontainer container bf0d982f25e27a1c6d0d139010725a8270134bad7af247835d1a6d33e920ec27. 
Jan 24 00:50:55.046000 audit: BPF prog-id=233 op=LOAD Jan 24 00:50:55.047000 audit: BPF prog-id=234 op=LOAD Jan 24 00:50:55.047000 audit[4378]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=4336 pid=4378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:55.047000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266306439383266323565323761316336643064313339303130373235 Jan 24 00:50:55.047000 audit: BPF prog-id=234 op=UNLOAD Jan 24 00:50:55.047000 audit[4378]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4336 pid=4378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:55.047000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266306439383266323565323761316336643064313339303130373235 Jan 24 00:50:55.047000 audit: BPF prog-id=235 op=LOAD Jan 24 00:50:55.047000 audit[4378]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4336 pid=4378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:55.047000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266306439383266323565323761316336643064313339303130373235 Jan 24 00:50:55.047000 audit: BPF prog-id=236 op=LOAD Jan 24 00:50:55.047000 audit[4378]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4336 pid=4378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:55.047000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266306439383266323565323761316336643064313339303130373235 Jan 24 00:50:55.047000 audit: BPF prog-id=236 op=UNLOAD Jan 24 00:50:55.047000 audit[4378]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4336 pid=4378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:55.047000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266306439383266323565323761316336643064313339303130373235 Jan 24 00:50:55.047000 audit: BPF prog-id=235 op=UNLOAD Jan 24 00:50:55.047000 audit[4378]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4336 pid=4378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:55.047000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266306439383266323565323761316336643064313339303130373235 Jan 24 00:50:55.047000 audit: BPF prog-id=237 op=LOAD Jan 24 00:50:55.047000 audit[4378]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=4336 pid=4378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:55.047000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266306439383266323565323761316336643064313339303130373235 Jan 24 00:50:55.070975 containerd[1593]: time="2026-01-24T00:50:55.070892612Z" level=info msg="StartContainer for \"bf0d982f25e27a1c6d0d139010725a8270134bad7af247835d1a6d33e920ec27\" returns successfully" Jan 24 00:50:55.821280 systemd-networkd[1499]: cali34314097d5a: Gained IPv6LL Jan 24 00:50:55.851567 kubelet[2807]: E0124 00:50:55.851448 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:55.852782 kubelet[2807]: E0124 00:50:55.852289 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588dc98d7b-7slqw" podUID="7daaf6bf-447d-4008-8095-834f38ae42be" Jan 24 00:50:55.871710 kubelet[2807]: I0124 00:50:55.871501 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-qc8gl" podStartSLOduration=31.871487827 podStartE2EDuration="31.871487827s" podCreationTimestamp="2026-01-24 00:50:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:50:55.870914667 +0000 UTC m=+37.271626483" watchObservedRunningTime="2026-01-24 00:50:55.871487827 +0000 UTC m=+37.272199643" Jan 24 00:50:55.924000 audit[4409]: NETFILTER_CFG table=filter:127 family=2 entries=17 op=nft_register_rule pid=4409 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:50:55.924000 audit[4409]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdfc2354f0 a2=0 a3=7ffdfc2354dc items=0 ppid=2920 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:55.924000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:50:55.928000 audit[4409]: NETFILTER_CFG table=nat:128 family=2 entries=35 op=nft_register_chain pid=4409 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:50:55.928000 audit[4409]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffdfc2354f0 a2=0 a3=7ffdfc2354dc items=0 ppid=2920 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:55.928000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:50:56.699914 kubelet[2807]: E0124 00:50:56.699774 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:56.702266 containerd[1593]: time="2026-01-24T00:50:56.701142479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-khdzd,Uid:b9b2bf92-1ff5-4c12-9c42-3933e6be5da9,Namespace:kube-system,Attempt:0,}" Jan 24 00:50:56.703117 containerd[1593]: time="2026-01-24T00:50:56.701317049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-rczw4,Uid:ed711851-0b36-4791-909e-763bb4210e62,Namespace:calico-system,Attempt:0,}" Jan 24 00:50:56.721190 systemd-networkd[1499]: cali402e55b24b1: Gained IPv6LL Jan 24 00:50:56.854246 kubelet[2807]: E0124 00:50:56.854191 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:56.889426 systemd-networkd[1499]: cali4597d9c16d4: Link UP Jan 24 00:50:56.891634 systemd-networkd[1499]: cali4597d9c16d4: Gained carrier Jan 24 00:50:56.902792 containerd[1593]: 2026-01-24 00:50:56.800 [INFO][4413] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--50--209-k8s-goldmane--7c778bb748--rczw4-eth0 goldmane-7c778bb748- calico-system ed711851-0b36-4791-909e-763bb4210e62 815 0 2026-01-24 00:50:34 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-239-50-209 goldmane-7c778bb748-rczw4 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4597d9c16d4 [] [] }} ContainerID="9b53c2f293930415d1f83b9a9ad820de432aa5b609efa407f1ab19b2bcf0b351" Namespace="calico-system" Pod="goldmane-7c778bb748-rczw4" WorkloadEndpoint="172--239--50--209-k8s-goldmane--7c778bb748--rczw4-" Jan 24 00:50:56.902792 containerd[1593]: 2026-01-24 00:50:56.800 [INFO][4413] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9b53c2f293930415d1f83b9a9ad820de432aa5b609efa407f1ab19b2bcf0b351" Namespace="calico-system" Pod="goldmane-7c778bb748-rczw4" WorkloadEndpoint="172--239--50--209-k8s-goldmane--7c778bb748--rczw4-eth0" Jan 24 00:50:56.902792 containerd[1593]: 2026-01-24 00:50:56.842 [INFO][4442] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b53c2f293930415d1f83b9a9ad820de432aa5b609efa407f1ab19b2bcf0b351" HandleID="k8s-pod-network.9b53c2f293930415d1f83b9a9ad820de432aa5b609efa407f1ab19b2bcf0b351" 
Workload="172--239--50--209-k8s-goldmane--7c778bb748--rczw4-eth0" Jan 24 00:50:56.902792 containerd[1593]: 2026-01-24 00:50:56.842 [INFO][4442] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9b53c2f293930415d1f83b9a9ad820de432aa5b609efa407f1ab19b2bcf0b351" HandleID="k8s-pod-network.9b53c2f293930415d1f83b9a9ad820de432aa5b609efa407f1ab19b2bcf0b351" Workload="172--239--50--209-k8s-goldmane--7c778bb748--rczw4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56d0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-50-209", "pod":"goldmane-7c778bb748-rczw4", "timestamp":"2026-01-24 00:50:56.842016944 +0000 UTC"}, Hostname:"172-239-50-209", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:50:56.902792 containerd[1593]: 2026-01-24 00:50:56.842 [INFO][4442] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:50:56.902792 containerd[1593]: 2026-01-24 00:50:56.842 [INFO][4442] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:50:56.902792 containerd[1593]: 2026-01-24 00:50:56.842 [INFO][4442] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-50-209' Jan 24 00:50:56.902792 containerd[1593]: 2026-01-24 00:50:56.848 [INFO][4442] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9b53c2f293930415d1f83b9a9ad820de432aa5b609efa407f1ab19b2bcf0b351" host="172-239-50-209" Jan 24 00:50:56.902792 containerd[1593]: 2026-01-24 00:50:56.853 [INFO][4442] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-50-209" Jan 24 00:50:56.902792 containerd[1593]: 2026-01-24 00:50:56.858 [INFO][4442] ipam/ipam.go 511: Trying affinity for 192.168.113.0/26 host="172-239-50-209" Jan 24 00:50:56.902792 containerd[1593]: 2026-01-24 00:50:56.860 [INFO][4442] ipam/ipam.go 158: Attempting to load block cidr=192.168.113.0/26 host="172-239-50-209" Jan 24 00:50:56.902792 containerd[1593]: 2026-01-24 00:50:56.862 [INFO][4442] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.113.0/26 host="172-239-50-209" Jan 24 00:50:56.902792 containerd[1593]: 2026-01-24 00:50:56.862 [INFO][4442] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.113.0/26 handle="k8s-pod-network.9b53c2f293930415d1f83b9a9ad820de432aa5b609efa407f1ab19b2bcf0b351" host="172-239-50-209" Jan 24 00:50:56.902792 containerd[1593]: 2026-01-24 00:50:56.864 [INFO][4442] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9b53c2f293930415d1f83b9a9ad820de432aa5b609efa407f1ab19b2bcf0b351 Jan 24 00:50:56.902792 containerd[1593]: 2026-01-24 00:50:56.867 [INFO][4442] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.113.0/26 handle="k8s-pod-network.9b53c2f293930415d1f83b9a9ad820de432aa5b609efa407f1ab19b2bcf0b351" host="172-239-50-209" Jan 24 00:50:56.902792 containerd[1593]: 2026-01-24 00:50:56.878 [INFO][4442] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.113.4/26] block=192.168.113.0/26 handle="k8s-pod-network.9b53c2f293930415d1f83b9a9ad820de432aa5b609efa407f1ab19b2bcf0b351" host="172-239-50-209" Jan 24 00:50:56.902792 containerd[1593]: 2026-01-24 00:50:56.878 [INFO][4442] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.113.4/26] handle="k8s-pod-network.9b53c2f293930415d1f83b9a9ad820de432aa5b609efa407f1ab19b2bcf0b351" host="172-239-50-209" Jan 24 00:50:56.902792 
containerd[1593]: 2026-01-24 00:50:56.880 [INFO][4442] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:50:56.902792 containerd[1593]: 2026-01-24 00:50:56.880 [INFO][4442] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.113.4/26] IPv6=[] ContainerID="9b53c2f293930415d1f83b9a9ad820de432aa5b609efa407f1ab19b2bcf0b351" HandleID="k8s-pod-network.9b53c2f293930415d1f83b9a9ad820de432aa5b609efa407f1ab19b2bcf0b351" Workload="172--239--50--209-k8s-goldmane--7c778bb748--rczw4-eth0" Jan 24 00:50:56.904867 containerd[1593]: 2026-01-24 00:50:56.884 [INFO][4413] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9b53c2f293930415d1f83b9a9ad820de432aa5b609efa407f1ab19b2bcf0b351" Namespace="calico-system" Pod="goldmane-7c778bb748-rczw4" WorkloadEndpoint="172--239--50--209-k8s-goldmane--7c778bb748--rczw4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--50--209-k8s-goldmane--7c778bb748--rczw4-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"ed711851-0b36-4791-909e-763bb4210e62", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 50, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-50-209", ContainerID:"", Pod:"goldmane-7c778bb748-rczw4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.113.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4597d9c16d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:50:56.904867 containerd[1593]: 2026-01-24 00:50:56.884 [INFO][4413] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.4/32] ContainerID="9b53c2f293930415d1f83b9a9ad820de432aa5b609efa407f1ab19b2bcf0b351" Namespace="calico-system" Pod="goldmane-7c778bb748-rczw4" WorkloadEndpoint="172--239--50--209-k8s-goldmane--7c778bb748--rczw4-eth0" Jan 24 00:50:56.904867 containerd[1593]: 2026-01-24 00:50:56.884 [INFO][4413] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4597d9c16d4 ContainerID="9b53c2f293930415d1f83b9a9ad820de432aa5b609efa407f1ab19b2bcf0b351" Namespace="calico-system" Pod="goldmane-7c778bb748-rczw4" WorkloadEndpoint="172--239--50--209-k8s-goldmane--7c778bb748--rczw4-eth0" Jan 24 00:50:56.904867 containerd[1593]: 2026-01-24 00:50:56.892 [INFO][4413] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9b53c2f293930415d1f83b9a9ad820de432aa5b609efa407f1ab19b2bcf0b351" Namespace="calico-system" Pod="goldmane-7c778bb748-rczw4" WorkloadEndpoint="172--239--50--209-k8s-goldmane--7c778bb748--rczw4-eth0" Jan 24 00:50:56.904867 containerd[1593]: 2026-01-24 00:50:56.892 [INFO][4413] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9b53c2f293930415d1f83b9a9ad820de432aa5b609efa407f1ab19b2bcf0b351" Namespace="calico-system" Pod="goldmane-7c778bb748-rczw4" WorkloadEndpoint="172--239--50--209-k8s-goldmane--7c778bb748--rczw4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--50--209-k8s-goldmane--7c778bb748--rczw4-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"ed711851-0b36-4791-909e-763bb4210e62", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 50, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-50-209", ContainerID:"9b53c2f293930415d1f83b9a9ad820de432aa5b609efa407f1ab19b2bcf0b351", Pod:"goldmane-7c778bb748-rczw4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.113.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4597d9c16d4", MAC:"4e:b8:dd:6e:1c:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:50:56.904867 containerd[1593]: 2026-01-24 00:50:56.900 [INFO][4413] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9b53c2f293930415d1f83b9a9ad820de432aa5b609efa407f1ab19b2bcf0b351" Namespace="calico-system" Pod="goldmane-7c778bb748-rczw4" WorkloadEndpoint="172--239--50--209-k8s-goldmane--7c778bb748--rczw4-eth0" Jan 24 00:50:56.936820 containerd[1593]: time="2026-01-24T00:50:56.936723145Z" level=info msg="connecting to shim 9b53c2f293930415d1f83b9a9ad820de432aa5b609efa407f1ab19b2bcf0b351" address="unix:///run/containerd/s/6527dcd385cec2bd71e24df01db941fe1ef2921c720e11530f8fb4b3e139857a" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:50:56.935000 audit[4463]: NETFILTER_CFG table=filter:129 family=2 entries=58 op=nft_register_chain pid=4463 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:50:56.935000 audit[4463]: SYSCALL arch=c000003e syscall=46 success=yes exit=30408 a0=3 a1=7ffe7806cf50 a2=0 a3=7ffe7806cf3c items=0 ppid=4097 pid=4463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:56.935000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:50:56.977012 systemd[1]: Started cri-containerd-9b53c2f293930415d1f83b9a9ad820de432aa5b609efa407f1ab19b2bcf0b351.scope - libcontainer container 9b53c2f293930415d1f83b9a9ad820de432aa5b609efa407f1ab19b2bcf0b351. 
Jan 24 00:50:56.986581 systemd-networkd[1499]: cali9aa2ef2bc16: Link UP Jan 24 00:50:56.986907 systemd-networkd[1499]: cali9aa2ef2bc16: Gained carrier Jan 24 00:50:57.003707 containerd[1593]: 2026-01-24 00:50:56.793 [INFO][4412] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--50--209-k8s-coredns--66bc5c9577--khdzd-eth0 coredns-66bc5c9577- kube-system b9b2bf92-1ff5-4c12-9c42-3933e6be5da9 818 0 2026-01-24 00:50:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-239-50-209 coredns-66bc5c9577-khdzd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9aa2ef2bc16 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="607f3f4d840f56c5e0a115883a5b3c75702b0d3b728871d92359a72046a85507" Namespace="kube-system" Pod="coredns-66bc5c9577-khdzd" WorkloadEndpoint="172--239--50--209-k8s-coredns--66bc5c9577--khdzd-" Jan 24 00:50:57.003707 containerd[1593]: 2026-01-24 00:50:56.794 [INFO][4412] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="607f3f4d840f56c5e0a115883a5b3c75702b0d3b728871d92359a72046a85507" Namespace="kube-system" Pod="coredns-66bc5c9577-khdzd" WorkloadEndpoint="172--239--50--209-k8s-coredns--66bc5c9577--khdzd-eth0" Jan 24 00:50:57.003707 containerd[1593]: 2026-01-24 00:50:56.840 [INFO][4436] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="607f3f4d840f56c5e0a115883a5b3c75702b0d3b728871d92359a72046a85507" HandleID="k8s-pod-network.607f3f4d840f56c5e0a115883a5b3c75702b0d3b728871d92359a72046a85507" Workload="172--239--50--209-k8s-coredns--66bc5c9577--khdzd-eth0" Jan 24 00:50:57.003707 containerd[1593]: 2026-01-24 00:50:56.842 [INFO][4436] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="607f3f4d840f56c5e0a115883a5b3c75702b0d3b728871d92359a72046a85507" HandleID="k8s-pod-network.607f3f4d840f56c5e0a115883a5b3c75702b0d3b728871d92359a72046a85507" Workload="172--239--50--209-k8s-coredns--66bc5c9577--khdzd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad410), Attrs:map[string]string{"namespace":"kube-system", "node":"172-239-50-209", "pod":"coredns-66bc5c9577-khdzd", "timestamp":"2026-01-24 00:50:56.840269642 +0000 UTC"}, Hostname:"172-239-50-209", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:50:57.003707 containerd[1593]: 2026-01-24 00:50:56.842 [INFO][4436] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:50:57.003707 containerd[1593]: 2026-01-24 00:50:56.878 [INFO][4436] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:50:57.003707 containerd[1593]: 2026-01-24 00:50:56.878 [INFO][4436] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-50-209' Jan 24 00:50:57.003707 containerd[1593]: 2026-01-24 00:50:56.950 [INFO][4436] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.607f3f4d840f56c5e0a115883a5b3c75702b0d3b728871d92359a72046a85507" host="172-239-50-209" Jan 24 00:50:57.003707 containerd[1593]: 2026-01-24 00:50:56.955 [INFO][4436] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-50-209" Jan 24 00:50:57.003707 containerd[1593]: 2026-01-24 00:50:56.959 [INFO][4436] ipam/ipam.go 511: Trying affinity for 192.168.113.0/26 host="172-239-50-209" Jan 24 00:50:57.003707 containerd[1593]: 2026-01-24 00:50:56.961 [INFO][4436] ipam/ipam.go 158: Attempting to load block cidr=192.168.113.0/26 host="172-239-50-209" Jan 24 00:50:57.003707 containerd[1593]: 2026-01-24 00:50:56.964 [INFO][4436] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.113.0/26 host="172-239-50-209" Jan 24 00:50:57.003707 containerd[1593]: 2026-01-24 00:50:56.964 [INFO][4436] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.113.0/26 handle="k8s-pod-network.607f3f4d840f56c5e0a115883a5b3c75702b0d3b728871d92359a72046a85507" host="172-239-50-209" Jan 24 00:50:57.003707 containerd[1593]: 2026-01-24 00:50:56.966 [INFO][4436] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.607f3f4d840f56c5e0a115883a5b3c75702b0d3b728871d92359a72046a85507 Jan 24 00:50:57.003707 containerd[1593]: 2026-01-24 00:50:56.971 [INFO][4436] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.113.0/26 handle="k8s-pod-network.607f3f4d840f56c5e0a115883a5b3c75702b0d3b728871d92359a72046a85507" host="172-239-50-209" Jan 24 00:50:57.003707 containerd[1593]: 2026-01-24 00:50:56.979 [INFO][4436] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.113.5/26] block=192.168.113.0/26 handle="k8s-pod-network.607f3f4d840f56c5e0a115883a5b3c75702b0d3b728871d92359a72046a85507" host="172-239-50-209" Jan 24 00:50:57.003707 containerd[1593]: 2026-01-24 00:50:56.979 [INFO][4436] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.113.5/26] handle="k8s-pod-network.607f3f4d840f56c5e0a115883a5b3c75702b0d3b728871d92359a72046a85507" host="172-239-50-209" Jan 24 00:50:57.003707 containerd[1593]: 2026-01-24 00:50:56.979 [INFO][4436] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:50:57.003707 containerd[1593]: 2026-01-24 00:50:56.979 [INFO][4436] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.113.5/26] IPv6=[] ContainerID="607f3f4d840f56c5e0a115883a5b3c75702b0d3b728871d92359a72046a85507" HandleID="k8s-pod-network.607f3f4d840f56c5e0a115883a5b3c75702b0d3b728871d92359a72046a85507" Workload="172--239--50--209-k8s-coredns--66bc5c9577--khdzd-eth0" Jan 24 00:50:57.004231 containerd[1593]: 2026-01-24 00:50:56.982 [INFO][4412] cni-plugin/k8s.go 418: Populated endpoint ContainerID="607f3f4d840f56c5e0a115883a5b3c75702b0d3b728871d92359a72046a85507" Namespace="kube-system" Pod="coredns-66bc5c9577-khdzd" WorkloadEndpoint="172--239--50--209-k8s-coredns--66bc5c9577--khdzd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--50--209-k8s-coredns--66bc5c9577--khdzd-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b9b2bf92-1ff5-4c12-9c42-3933e6be5da9", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 50, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-50-209", ContainerID:"", Pod:"coredns-66bc5c9577-khdzd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9aa2ef2bc16", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:50:57.004231 containerd[1593]: 2026-01-24 00:50:56.982 [INFO][4412] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.5/32] ContainerID="607f3f4d840f56c5e0a115883a5b3c75702b0d3b728871d92359a72046a85507" Namespace="kube-system" Pod="coredns-66bc5c9577-khdzd" WorkloadEndpoint="172--239--50--209-k8s-coredns--66bc5c9577--khdzd-eth0" Jan 24 00:50:57.004231 containerd[1593]: 2026-01-24 00:50:56.982 [INFO][4412] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9aa2ef2bc16 ContainerID="607f3f4d840f56c5e0a115883a5b3c75702b0d3b728871d92359a72046a85507" Namespace="kube-system" Pod="coredns-66bc5c9577-khdzd" WorkloadEndpoint="172--239--50--209-k8s-coredns--66bc5c9577--khdzd-eth0" Jan 24 00:50:57.004231 
containerd[1593]: 2026-01-24 00:50:56.986 [INFO][4412] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="607f3f4d840f56c5e0a115883a5b3c75702b0d3b728871d92359a72046a85507" Namespace="kube-system" Pod="coredns-66bc5c9577-khdzd" WorkloadEndpoint="172--239--50--209-k8s-coredns--66bc5c9577--khdzd-eth0" Jan 24 00:50:57.004231 containerd[1593]: 2026-01-24 00:50:56.987 [INFO][4412] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="607f3f4d840f56c5e0a115883a5b3c75702b0d3b728871d92359a72046a85507" Namespace="kube-system" Pod="coredns-66bc5c9577-khdzd" WorkloadEndpoint="172--239--50--209-k8s-coredns--66bc5c9577--khdzd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--50--209-k8s-coredns--66bc5c9577--khdzd-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b9b2bf92-1ff5-4c12-9c42-3933e6be5da9", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 50, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-50-209", ContainerID:"607f3f4d840f56c5e0a115883a5b3c75702b0d3b728871d92359a72046a85507", Pod:"coredns-66bc5c9577-khdzd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9aa2ef2bc16", MAC:"b6:94:f7:77:f8:e3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:50:57.004231 containerd[1593]: 2026-01-24 00:50:57.000 [INFO][4412] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="607f3f4d840f56c5e0a115883a5b3c75702b0d3b728871d92359a72046a85507" Namespace="kube-system" Pod="coredns-66bc5c9577-khdzd" WorkloadEndpoint="172--239--50--209-k8s-coredns--66bc5c9577--khdzd-eth0" Jan 24 00:50:57.023000 audit: BPF prog-id=238 op=LOAD Jan 24 00:50:57.024000 audit: BPF prog-id=239 op=LOAD Jan 24 00:50:57.024000 audit[4480]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4469 pid=4480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:57.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962353363326632393339333034313564316638336239613961643832 Jan 24 00:50:57.024000 audit: BPF prog-id=239 op=UNLOAD Jan 24 00:50:57.024000 audit[4480]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4469 pid=4480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:57.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962353363326632393339333034313564316638336239613961643832 Jan 24 00:50:57.025000 audit: BPF prog-id=240 op=LOAD Jan 24 00:50:57.025000 audit[4480]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4469 pid=4480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:57.025000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962353363326632393339333034313564316638336239613961643832 Jan 24 00:50:57.025000 audit: BPF prog-id=241 op=LOAD Jan 24 00:50:57.025000 audit[4480]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4469 pid=4480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:57.025000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962353363326632393339333034313564316638336239613961643832 Jan 24 00:50:57.025000 audit: BPF prog-id=241 op=UNLOAD Jan 24 00:50:57.025000 audit[4480]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4469 pid=4480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:57.025000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962353363326632393339333034313564316638336239613961643832 Jan 24 00:50:57.025000 audit: BPF prog-id=240 op=UNLOAD Jan 24 00:50:57.025000 audit[4480]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4469 pid=4480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:57.025000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962353363326632393339333034313564316638336239613961643832 Jan 24 00:50:57.026000 audit: BPF prog-id=242 op=LOAD Jan 24 00:50:57.026000 audit[4480]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4469 pid=4480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:57.026000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962353363326632393339333034313564316638336239613961643832 Jan 24 00:50:57.035134 containerd[1593]: time="2026-01-24T00:50:57.035098959Z" level=info msg="connecting to shim 607f3f4d840f56c5e0a115883a5b3c75702b0d3b728871d92359a72046a85507" address="unix:///run/containerd/s/fb7bedf8dffbe2fdf8abe4c789d7ef3d66aa7662d70f1a2cff7ad53022ed521e" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:50:57.079197 systemd[1]: Started cri-containerd-607f3f4d840f56c5e0a115883a5b3c75702b0d3b728871d92359a72046a85507.scope - libcontainer container 607f3f4d840f56c5e0a115883a5b3c75702b0d3b728871d92359a72046a85507. Jan 24 00:50:57.100000 audit[4550]: NETFILTER_CFG table=filter:130 family=2 entries=40 op=nft_register_chain pid=4550 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:50:57.100000 audit: BPF prog-id=243 op=LOAD Jan 24 00:50:57.100000 audit[4550]: SYSCALL arch=c000003e syscall=46 success=yes exit=20328 a0=3 a1=7ffef7ac9be0 a2=0 a3=7ffef7ac9bcc items=0 ppid=4097 pid=4550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:57.101000 audit: BPF prog-id=244 op=LOAD Jan 24 00:50:57.101000 audit[4538]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=4519 pid=4538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:57.101000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630376633663464383430663536633565306131313538383361356233 Jan 24 00:50:57.101000 audit: BPF prog-id=244 op=UNLOAD Jan 24 00:50:57.101000 audit[4538]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4519 pid=4538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:57.101000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630376633663464383430663536633565306131313538383361356233 Jan 24 00:50:57.101000 audit: BPF prog-id=245 op=LOAD Jan 24 00:50:57.101000 audit[4538]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4519 pid=4538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:57.101000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630376633663464383430663536633565306131313538383361356233 Jan 24 00:50:57.101000 audit: BPF prog-id=246 op=LOAD Jan 24 00:50:57.101000 audit[4538]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4519 pid=4538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:57.101000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630376633663464383430663536633565306131313538383361356233 Jan 24 00:50:57.101000 audit: BPF prog-id=246 op=UNLOAD Jan 24 00:50:57.101000 audit[4538]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4519 pid=4538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:57.101000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630376633663464383430663536633565306131313538383361356233 Jan 24 00:50:57.101000 audit: BPF prog-id=245 op=UNLOAD Jan 24 00:50:57.101000 audit[4538]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4519 pid=4538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:57.101000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630376633663464383430663536633565306131313538383361356233 Jan 24 00:50:57.101000 audit: BPF prog-id=247 op=LOAD Jan 24 00:50:57.101000 audit[4538]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=4519 pid=4538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:57.101000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630376633663464383430663536633565306131313538383361356233 Jan 24 00:50:57.100000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:50:57.137888 containerd[1593]: 
time="2026-01-24T00:50:57.137607095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-rczw4,Uid:ed711851-0b36-4791-909e-763bb4210e62,Namespace:calico-system,Attempt:0,} returns sandbox id \"9b53c2f293930415d1f83b9a9ad820de432aa5b609efa407f1ab19b2bcf0b351\"" Jan 24 00:50:57.140403 containerd[1593]: time="2026-01-24T00:50:57.140369247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:50:57.147954 containerd[1593]: time="2026-01-24T00:50:57.147895794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-khdzd,Uid:b9b2bf92-1ff5-4c12-9c42-3933e6be5da9,Namespace:kube-system,Attempt:0,} returns sandbox id \"607f3f4d840f56c5e0a115883a5b3c75702b0d3b728871d92359a72046a85507\"" Jan 24 00:50:57.148477 kubelet[2807]: E0124 00:50:57.148458 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:57.151825 containerd[1593]: time="2026-01-24T00:50:57.151656658Z" level=info msg="CreateContainer within sandbox \"607f3f4d840f56c5e0a115883a5b3c75702b0d3b728871d92359a72046a85507\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:50:57.157318 containerd[1593]: time="2026-01-24T00:50:57.157297073Z" level=info msg="Container 993c891a8e9465afdf2a2fed1eb69d34c88b98eea44e3605cee5477f90b4e501: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:50:57.161320 containerd[1593]: time="2026-01-24T00:50:57.161289307Z" level=info msg="CreateContainer within sandbox \"607f3f4d840f56c5e0a115883a5b3c75702b0d3b728871d92359a72046a85507\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"993c891a8e9465afdf2a2fed1eb69d34c88b98eea44e3605cee5477f90b4e501\"" Jan 24 00:50:57.163595 containerd[1593]: time="2026-01-24T00:50:57.161894927Z" level=info msg="StartContainer for \"993c891a8e9465afdf2a2fed1eb69d34c88b98eea44e3605cee5477f90b4e501\"" Jan 24 00:50:57.163595 containerd[1593]: time="2026-01-24T00:50:57.162585428Z" level=info msg="connecting to shim 993c891a8e9465afdf2a2fed1eb69d34c88b98eea44e3605cee5477f90b4e501" address="unix:///run/containerd/s/fb7bedf8dffbe2fdf8abe4c789d7ef3d66aa7662d70f1a2cff7ad53022ed521e" protocol=ttrpc version=3 Jan 24 00:50:57.186954 systemd[1]: Started cri-containerd-993c891a8e9465afdf2a2fed1eb69d34c88b98eea44e3605cee5477f90b4e501.scope - libcontainer container 993c891a8e9465afdf2a2fed1eb69d34c88b98eea44e3605cee5477f90b4e501. 
Jan 24 00:50:57.201000 audit: BPF prog-id=248 op=LOAD Jan 24 00:50:57.202000 audit: BPF prog-id=249 op=LOAD Jan 24 00:50:57.202000 audit[4569]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4519 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:57.202000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939336338393161386539343635616664663261326665643165623639 Jan 24 00:50:57.202000 audit: BPF prog-id=249 op=UNLOAD Jan 24 00:50:57.202000 audit[4569]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4519 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:57.202000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939336338393161386539343635616664663261326665643165623639 Jan 24 00:50:57.202000 audit: BPF prog-id=250 op=LOAD Jan 24 00:50:57.202000 audit[4569]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4519 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:57.202000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939336338393161386539343635616664663261326665643165623639 Jan 24 00:50:57.202000 audit: BPF prog-id=251 op=LOAD Jan 24 00:50:57.202000 audit[4569]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4519 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:57.202000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939336338393161386539343635616664663261326665643165623639 Jan 24 00:50:57.202000 audit: BPF prog-id=251 op=UNLOAD Jan 24 00:50:57.202000 audit[4569]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4519 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:57.202000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939336338393161386539343635616664663261326665643165623639 Jan 24 00:50:57.202000 audit: BPF prog-id=250 op=UNLOAD Jan 24 00:50:57.202000 audit[4569]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4519 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:57.202000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939336338393161386539343635616664663261326665643165623639 Jan 24 00:50:57.202000 audit: BPF prog-id=252 op=LOAD Jan 24 00:50:57.202000 audit[4569]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4519 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:57.202000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939336338393161386539343635616664663261326665643165623639 Jan 24 00:50:57.221996 containerd[1593]: time="2026-01-24T00:50:57.221889843Z" level=info msg="StartContainer for \"993c891a8e9465afdf2a2fed1eb69d34c88b98eea44e3605cee5477f90b4e501\" returns successfully" Jan 24 00:50:57.279750 containerd[1593]: time="2026-01-24T00:50:57.278990576Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:50:57.280749 containerd[1593]: time="2026-01-24T00:50:57.280668428Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:50:57.280790 containerd[1593]: time="2026-01-24T00:50:57.280749728Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 24 00:50:57.281010 kubelet[2807]: E0124 00:50:57.280957 2807 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:50:57.281090 kubelet[2807]: E0124 00:50:57.281012 2807 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:50:57.281193 kubelet[2807]: E0124 00:50:57.281161 2807 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-rczw4_calico-system(ed711851-0b36-4791-909e-763bb4210e62): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:50:57.281266 kubelet[2807]: E0124 00:50:57.281227 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed 
to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-rczw4" podUID="ed711851-0b36-4791-909e-763bb4210e62" Jan 24 00:50:57.701904 containerd[1593]: time="2026-01-24T00:50:57.701858601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b94bdbd9d-dvkkh,Uid:8e3373a2-955e-4988-8f5b-bfdd560d0dba,Namespace:calico-system,Attempt:0,}" Jan 24 00:50:57.708050 containerd[1593]: time="2026-01-24T00:50:57.708023847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbqvx,Uid:f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c,Namespace:calico-system,Attempt:0,}" Jan 24 00:50:57.711106 containerd[1593]: time="2026-01-24T00:50:57.711083709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-588dc98d7b-pw4d6,Uid:7db270eb-ad67-4a41-9877-6d27ca50c210,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:50:57.869888 kubelet[2807]: E0124 00:50:57.869252 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:57.879839 kubelet[2807]: E0124 00:50:57.877907 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:57.882204 kubelet[2807]: E0124 00:50:57.881993 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-rczw4" podUID="ed711851-0b36-4791-909e-763bb4210e62" Jan 24 00:50:57.885865 systemd-networkd[1499]: calibe42bb70c98: Link UP Jan 24 00:50:57.889435 kubelet[2807]: I0124 00:50:57.889402 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-khdzd" podStartSLOduration=33.889391205 podStartE2EDuration="33.889391205s" podCreationTimestamp="2026-01-24 00:50:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:50:57.885325122 +0000 UTC m=+39.286036938" watchObservedRunningTime="2026-01-24 00:50:57.889391205 +0000 UTC m=+39.290103021" Jan 24 00:50:57.889901 systemd-networkd[1499]: calibe42bb70c98: Gained carrier Jan 24 00:50:57.912825 containerd[1593]: 2026-01-24 00:50:57.778 [INFO][4602] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--50--209-k8s-calico--kube--controllers--7b94bdbd9d--dvkkh-eth0 calico-kube-controllers-7b94bdbd9d- calico-system 8e3373a2-955e-4988-8f5b-bfdd560d0dba 805 0 2026-01-24 00:50:36 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7b94bdbd9d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-239-50-209 calico-kube-controllers-7b94bdbd9d-dvkkh eth0 calico-kube-controllers [] [] [kns.calico-system 
ksa.calico-system.calico-kube-controllers] calibe42bb70c98 [] [] }} ContainerID="c2a7a70b6ecd182c240eebc84e480bfdf4ccbbcd59a261ea2957c78bf32dc6cb" Namespace="calico-system" Pod="calico-kube-controllers-7b94bdbd9d-dvkkh" WorkloadEndpoint="172--239--50--209-k8s-calico--kube--controllers--7b94bdbd9d--dvkkh-" Jan 24 00:50:57.912825 containerd[1593]: 2026-01-24 00:50:57.778 [INFO][4602] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c2a7a70b6ecd182c240eebc84e480bfdf4ccbbcd59a261ea2957c78bf32dc6cb" Namespace="calico-system" Pod="calico-kube-controllers-7b94bdbd9d-dvkkh" WorkloadEndpoint="172--239--50--209-k8s-calico--kube--controllers--7b94bdbd9d--dvkkh-eth0" Jan 24 00:50:57.912825 containerd[1593]: 2026-01-24 00:50:57.815 [INFO][4637] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c2a7a70b6ecd182c240eebc84e480bfdf4ccbbcd59a261ea2957c78bf32dc6cb" HandleID="k8s-pod-network.c2a7a70b6ecd182c240eebc84e480bfdf4ccbbcd59a261ea2957c78bf32dc6cb" Workload="172--239--50--209-k8s-calico--kube--controllers--7b94bdbd9d--dvkkh-eth0" Jan 24 00:50:57.912825 containerd[1593]: 2026-01-24 00:50:57.815 [INFO][4637] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c2a7a70b6ecd182c240eebc84e480bfdf4ccbbcd59a261ea2957c78bf32dc6cb" HandleID="k8s-pod-network.c2a7a70b6ecd182c240eebc84e480bfdf4ccbbcd59a261ea2957c78bf32dc6cb" Workload="172--239--50--209-k8s-calico--kube--controllers--7b94bdbd9d--dvkkh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd6e0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-50-209", "pod":"calico-kube-controllers-7b94bdbd9d-dvkkh", "timestamp":"2026-01-24 00:50:57.815028636 +0000 UTC"}, Hostname:"172-239-50-209", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:50:57.912825 containerd[1593]: 2026-01-24 00:50:57.815 [INFO][4637] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:50:57.912825 containerd[1593]: 2026-01-24 00:50:57.815 [INFO][4637] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:50:57.912825 containerd[1593]: 2026-01-24 00:50:57.815 [INFO][4637] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-50-209' Jan 24 00:50:57.912825 containerd[1593]: 2026-01-24 00:50:57.826 [INFO][4637] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c2a7a70b6ecd182c240eebc84e480bfdf4ccbbcd59a261ea2957c78bf32dc6cb" host="172-239-50-209" Jan 24 00:50:57.912825 containerd[1593]: 2026-01-24 00:50:57.833 [INFO][4637] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-50-209" Jan 24 00:50:57.912825 containerd[1593]: 2026-01-24 00:50:57.840 [INFO][4637] ipam/ipam.go 511: Trying affinity for 192.168.113.0/26 host="172-239-50-209" Jan 24 00:50:57.912825 containerd[1593]: 2026-01-24 00:50:57.842 [INFO][4637] ipam/ipam.go 158: Attempting to load block cidr=192.168.113.0/26 host="172-239-50-209" Jan 24 00:50:57.912825 containerd[1593]: 2026-01-24 00:50:57.845 [INFO][4637] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.113.0/26 host="172-239-50-209" Jan 24 00:50:57.912825 containerd[1593]: 2026-01-24 00:50:57.846 [INFO][4637] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.113.0/26 handle="k8s-pod-network.c2a7a70b6ecd182c240eebc84e480bfdf4ccbbcd59a261ea2957c78bf32dc6cb" host="172-239-50-209" Jan 24 00:50:57.912825 containerd[1593]: 2026-01-24 00:50:57.849 [INFO][4637] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c2a7a70b6ecd182c240eebc84e480bfdf4ccbbcd59a261ea2957c78bf32dc6cb Jan 24 00:50:57.912825 containerd[1593]: 2026-01-24 00:50:57.854 [INFO][4637] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.113.0/26 handle="k8s-pod-network.c2a7a70b6ecd182c240eebc84e480bfdf4ccbbcd59a261ea2957c78bf32dc6cb" host="172-239-50-209" Jan 24 00:50:57.912825 containerd[1593]: 2026-01-24 00:50:57.863 [INFO][4637] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.113.6/26] block=192.168.113.0/26 handle="k8s-pod-network.c2a7a70b6ecd182c240eebc84e480bfdf4ccbbcd59a261ea2957c78bf32dc6cb" host="172-239-50-209" Jan 24 00:50:57.912825 containerd[1593]: 2026-01-24 00:50:57.863 [INFO][4637] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.113.6/26] handle="k8s-pod-network.c2a7a70b6ecd182c240eebc84e480bfdf4ccbbcd59a261ea2957c78bf32dc6cb" host="172-239-50-209" Jan 24 00:50:57.912825 containerd[1593]: 2026-01-24 00:50:57.864 [INFO][4637] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:50:57.912825 containerd[1593]: 2026-01-24 00:50:57.864 [INFO][4637] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.113.6/26] IPv6=[] ContainerID="c2a7a70b6ecd182c240eebc84e480bfdf4ccbbcd59a261ea2957c78bf32dc6cb" HandleID="k8s-pod-network.c2a7a70b6ecd182c240eebc84e480bfdf4ccbbcd59a261ea2957c78bf32dc6cb" Workload="172--239--50--209-k8s-calico--kube--controllers--7b94bdbd9d--dvkkh-eth0" Jan 24 00:50:57.913546 containerd[1593]: 2026-01-24 00:50:57.870 [INFO][4602] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c2a7a70b6ecd182c240eebc84e480bfdf4ccbbcd59a261ea2957c78bf32dc6cb" Namespace="calico-system" Pod="calico-kube-controllers-7b94bdbd9d-dvkkh" WorkloadEndpoint="172--239--50--209-k8s-calico--kube--controllers--7b94bdbd9d--dvkkh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--50--209-k8s-calico--kube--controllers--7b94bdbd9d--dvkkh-eth0", GenerateName:"calico-kube-controllers-7b94bdbd9d-", Namespace:"calico-system", SelfLink:"", UID:"8e3373a2-955e-4988-8f5b-bfdd560d0dba", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 50, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b94bdbd9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-50-209", ContainerID:"", Pod:"calico-kube-controllers-7b94bdbd9d-dvkkh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.113.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibe42bb70c98", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:50:57.913546 containerd[1593]: 2026-01-24 00:50:57.870 [INFO][4602] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.6/32] ContainerID="c2a7a70b6ecd182c240eebc84e480bfdf4ccbbcd59a261ea2957c78bf32dc6cb" Namespace="calico-system" Pod="calico-kube-controllers-7b94bdbd9d-dvkkh" WorkloadEndpoint="172--239--50--209-k8s-calico--kube--controllers--7b94bdbd9d--dvkkh-eth0" Jan 24 00:50:57.913546 containerd[1593]: 2026-01-24 00:50:57.870 [INFO][4602] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibe42bb70c98 ContainerID="c2a7a70b6ecd182c240eebc84e480bfdf4ccbbcd59a261ea2957c78bf32dc6cb" Namespace="calico-system" Pod="calico-kube-controllers-7b94bdbd9d-dvkkh" WorkloadEndpoint="172--239--50--209-k8s-calico--kube--controllers--7b94bdbd9d--dvkkh-eth0" Jan 24 00:50:57.913546 containerd[1593]: 2026-01-24 00:50:57.888 [INFO][4602] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c2a7a70b6ecd182c240eebc84e480bfdf4ccbbcd59a261ea2957c78bf32dc6cb" Namespace="calico-system" Pod="calico-kube-controllers-7b94bdbd9d-dvkkh" WorkloadEndpoint="172--239--50--209-k8s-calico--kube--controllers--7b94bdbd9d--dvkkh-eth0" Jan 24 00:50:57.913546 containerd[1593]: 2026-01-24 00:50:57.888 
[INFO][4602] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c2a7a70b6ecd182c240eebc84e480bfdf4ccbbcd59a261ea2957c78bf32dc6cb" Namespace="calico-system" Pod="calico-kube-controllers-7b94bdbd9d-dvkkh" WorkloadEndpoint="172--239--50--209-k8s-calico--kube--controllers--7b94bdbd9d--dvkkh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--50--209-k8s-calico--kube--controllers--7b94bdbd9d--dvkkh-eth0", GenerateName:"calico-kube-controllers-7b94bdbd9d-", Namespace:"calico-system", SelfLink:"", UID:"8e3373a2-955e-4988-8f5b-bfdd560d0dba", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 50, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b94bdbd9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-50-209", ContainerID:"c2a7a70b6ecd182c240eebc84e480bfdf4ccbbcd59a261ea2957c78bf32dc6cb", Pod:"calico-kube-controllers-7b94bdbd9d-dvkkh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.113.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibe42bb70c98", MAC:"b6:a8:96:27:17:83", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:50:57.913546 containerd[1593]: 2026-01-24 00:50:57.908 [INFO][4602] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c2a7a70b6ecd182c240eebc84e480bfdf4ccbbcd59a261ea2957c78bf32dc6cb" Namespace="calico-system" Pod="calico-kube-controllers-7b94bdbd9d-dvkkh" WorkloadEndpoint="172--239--50--209-k8s-calico--kube--controllers--7b94bdbd9d--dvkkh-eth0" Jan 24 00:50:57.952224 containerd[1593]: time="2026-01-24T00:50:57.952049005Z" level=info msg="connecting to shim c2a7a70b6ecd182c240eebc84e480bfdf4ccbbcd59a261ea2957c78bf32dc6cb" address="unix:///run/containerd/s/934ba5dbeaba7ab1ccb54d686cb00996691ab9e126f62160287f32f80581b561" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:50:57.969000 audit[4679]: NETFILTER_CFG table=filter:131 family=2 entries=54 op=nft_register_chain pid=4679 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:50:57.969000 audit[4679]: SYSCALL arch=c000003e syscall=46 success=yes exit=25976 a0=3 a1=7fff3c6ed5f0 a2=0 a3=7fff3c6ed5dc items=0 ppid=4097 pid=4679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:57.969000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:50:57.977000 audit[4683]: NETFILTER_CFG table=filter:132 family=2 entries=14 op=nft_register_rule pid=4683 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 24 00:50:57.977000 audit[4683]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe28b2dd20 a2=0 a3=7ffe28b2dd0c items=0 ppid=2920 pid=4683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:57.977000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:50:57.993905 systemd[1]: Started cri-containerd-c2a7a70b6ecd182c240eebc84e480bfdf4ccbbcd59a261ea2957c78bf32dc6cb.scope - libcontainer container c2a7a70b6ecd182c240eebc84e480bfdf4ccbbcd59a261ea2957c78bf32dc6cb. Jan 24 00:50:57.991000 audit[4683]: NETFILTER_CFG table=nat:133 family=2 entries=44 op=nft_register_rule pid=4683 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:50:58.003848 systemd-networkd[1499]: caliadda9970295: Link UP Jan 24 00:50:58.005082 systemd-networkd[1499]: caliadda9970295: Gained carrier Jan 24 00:50:57.991000 audit[4683]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffe28b2dd20 a2=0 a3=7ffe28b2dd0c items=0 ppid=2920 pid=4683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:57.991000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:50:58.041842 containerd[1593]: 2026-01-24 00:50:57.803 [INFO][4603] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--50--209-k8s-calico--apiserver--588dc98d7b--pw4d6-eth0 calico-apiserver-588dc98d7b- calico-apiserver 7db270eb-ad67-4a41-9877-6d27ca50c210 817 0 2026-01-24 00:50:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:588dc98d7b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-239-50-209 calico-apiserver-588dc98d7b-pw4d6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliadda9970295 [] [] }} ContainerID="d7e8b3dfb416612579c15f5203534c74c3a7a1c4274b60abaeee0f0156dde380" Namespace="calico-apiserver" Pod="calico-apiserver-588dc98d7b-pw4d6" WorkloadEndpoint="172--239--50--209-k8s-calico--apiserver--588dc98d7b--pw4d6-" Jan 24 00:50:58.041842 containerd[1593]: 2026-01-24 00:50:57.803 [INFO][4603] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d7e8b3dfb416612579c15f5203534c74c3a7a1c4274b60abaeee0f0156dde380" Namespace="calico-apiserver" Pod="calico-apiserver-588dc98d7b-pw4d6" WorkloadEndpoint="172--239--50--209-k8s-calico--apiserver--588dc98d7b--pw4d6-eth0" Jan 24 00:50:58.041842 containerd[1593]: 2026-01-24 00:50:57.844 [INFO][4643] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d7e8b3dfb416612579c15f5203534c74c3a7a1c4274b60abaeee0f0156dde380" HandleID="k8s-pod-network.d7e8b3dfb416612579c15f5203534c74c3a7a1c4274b60abaeee0f0156dde380" Workload="172--239--50--209-k8s-calico--apiserver--588dc98d7b--pw4d6-eth0" Jan 24 00:50:58.041842 containerd[1593]: 2026-01-24 00:50:57.844 [INFO][4643] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d7e8b3dfb416612579c15f5203534c74c3a7a1c4274b60abaeee0f0156dde380" 
HandleID="k8s-pod-network.d7e8b3dfb416612579c15f5203534c74c3a7a1c4274b60abaeee0f0156dde380" Workload="172--239--50--209-k8s-calico--apiserver--588dc98d7b--pw4d6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002bd030), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-239-50-209", "pod":"calico-apiserver-588dc98d7b-pw4d6", "timestamp":"2026-01-24 00:50:57.844154834 +0000 UTC"}, Hostname:"172-239-50-209", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:50:58.041842 containerd[1593]: 2026-01-24 00:50:57.844 [INFO][4643] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:50:58.041842 containerd[1593]: 2026-01-24 00:50:57.863 [INFO][4643] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:50:58.041842 containerd[1593]: 2026-01-24 00:50:57.863 [INFO][4643] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-50-209' Jan 24 00:50:58.041842 containerd[1593]: 2026-01-24 00:50:57.942 [INFO][4643] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d7e8b3dfb416612579c15f5203534c74c3a7a1c4274b60abaeee0f0156dde380" host="172-239-50-209" Jan 24 00:50:58.041842 containerd[1593]: 2026-01-24 00:50:57.959 [INFO][4643] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-50-209" Jan 24 00:50:58.041842 containerd[1593]: 2026-01-24 00:50:57.966 [INFO][4643] ipam/ipam.go 511: Trying affinity for 192.168.113.0/26 host="172-239-50-209" Jan 24 00:50:58.041842 containerd[1593]: 2026-01-24 00:50:57.969 [INFO][4643] ipam/ipam.go 158: Attempting to load block cidr=192.168.113.0/26 host="172-239-50-209" Jan 24 00:50:58.041842 containerd[1593]: 2026-01-24 00:50:57.975 [INFO][4643] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.113.0/26 host="172-239-50-209" Jan 24 00:50:58.041842 containerd[1593]: 2026-01-24 00:50:57.975 [INFO][4643] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.113.0/26 handle="k8s-pod-network.d7e8b3dfb416612579c15f5203534c74c3a7a1c4274b60abaeee0f0156dde380" host="172-239-50-209" Jan 24 00:50:58.041842 containerd[1593]: 2026-01-24 00:50:57.977 [INFO][4643] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d7e8b3dfb416612579c15f5203534c74c3a7a1c4274b60abaeee0f0156dde380 Jan 24 00:50:58.041842 containerd[1593]: 2026-01-24 00:50:57.982 [INFO][4643] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.113.0/26 handle="k8s-pod-network.d7e8b3dfb416612579c15f5203534c74c3a7a1c4274b60abaeee0f0156dde380" host="172-239-50-209" Jan 24 00:50:58.041842 containerd[1593]: 2026-01-24 00:50:57.988 [INFO][4643] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.113.7/26] block=192.168.113.0/26 handle="k8s-pod-network.d7e8b3dfb416612579c15f5203534c74c3a7a1c4274b60abaeee0f0156dde380" host="172-239-50-209" Jan 24 00:50:58.041842 containerd[1593]: 2026-01-24 00:50:57.988 [INFO][4643] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.113.7/26] handle="k8s-pod-network.d7e8b3dfb416612579c15f5203534c74c3a7a1c4274b60abaeee0f0156dde380" host="172-239-50-209" Jan 24 00:50:58.041842 containerd[1593]: 2026-01-24 00:50:57.993 [INFO][4643] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:50:58.041842 containerd[1593]: 2026-01-24 00:50:57.995 [INFO][4643] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.113.7/26] IPv6=[] ContainerID="d7e8b3dfb416612579c15f5203534c74c3a7a1c4274b60abaeee0f0156dde380" HandleID="k8s-pod-network.d7e8b3dfb416612579c15f5203534c74c3a7a1c4274b60abaeee0f0156dde380" Workload="172--239--50--209-k8s-calico--apiserver--588dc98d7b--pw4d6-eth0" Jan 24 00:50:58.043160 containerd[1593]: 2026-01-24 00:50:58.000 [INFO][4603] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d7e8b3dfb416612579c15f5203534c74c3a7a1c4274b60abaeee0f0156dde380" Namespace="calico-apiserver" Pod="calico-apiserver-588dc98d7b-pw4d6" WorkloadEndpoint="172--239--50--209-k8s-calico--apiserver--588dc98d7b--pw4d6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--50--209-k8s-calico--apiserver--588dc98d7b--pw4d6-eth0", GenerateName:"calico-apiserver-588dc98d7b-", Namespace:"calico-apiserver", SelfLink:"", UID:"7db270eb-ad67-4a41-9877-6d27ca50c210", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 50, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"588dc98d7b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-50-209", ContainerID:"", Pod:"calico-apiserver-588dc98d7b-pw4d6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliadda9970295", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:50:58.043160 containerd[1593]: 2026-01-24 00:50:58.001 [INFO][4603] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.7/32] ContainerID="d7e8b3dfb416612579c15f5203534c74c3a7a1c4274b60abaeee0f0156dde380" Namespace="calico-apiserver" Pod="calico-apiserver-588dc98d7b-pw4d6" WorkloadEndpoint="172--239--50--209-k8s-calico--apiserver--588dc98d7b--pw4d6-eth0" Jan 24 00:50:58.043160 containerd[1593]: 2026-01-24 00:50:58.001 [INFO][4603] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliadda9970295 ContainerID="d7e8b3dfb416612579c15f5203534c74c3a7a1c4274b60abaeee0f0156dde380" Namespace="calico-apiserver" Pod="calico-apiserver-588dc98d7b-pw4d6" WorkloadEndpoint="172--239--50--209-k8s-calico--apiserver--588dc98d7b--pw4d6-eth0" Jan 24 00:50:58.043160 containerd[1593]: 2026-01-24 00:50:58.005 [INFO][4603] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d7e8b3dfb416612579c15f5203534c74c3a7a1c4274b60abaeee0f0156dde380" Namespace="calico-apiserver" Pod="calico-apiserver-588dc98d7b-pw4d6" WorkloadEndpoint="172--239--50--209-k8s-calico--apiserver--588dc98d7b--pw4d6-eth0" Jan 24 00:50:58.043160 containerd[1593]: 2026-01-24 00:50:58.006 [INFO][4603] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="d7e8b3dfb416612579c15f5203534c74c3a7a1c4274b60abaeee0f0156dde380" Namespace="calico-apiserver" Pod="calico-apiserver-588dc98d7b-pw4d6" WorkloadEndpoint="172--239--50--209-k8s-calico--apiserver--588dc98d7b--pw4d6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--50--209-k8s-calico--apiserver--588dc98d7b--pw4d6-eth0", GenerateName:"calico-apiserver-588dc98d7b-", Namespace:"calico-apiserver", SelfLink:"", UID:"7db270eb-ad67-4a41-9877-6d27ca50c210", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 50, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"588dc98d7b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-50-209", ContainerID:"d7e8b3dfb416612579c15f5203534c74c3a7a1c4274b60abaeee0f0156dde380", Pod:"calico-apiserver-588dc98d7b-pw4d6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliadda9970295", MAC:"d6:46:a1:ee:49:31", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:50:58.043160 containerd[1593]: 2026-01-24 00:50:58.037 [INFO][4603] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d7e8b3dfb416612579c15f5203534c74c3a7a1c4274b60abaeee0f0156dde380" Namespace="calico-apiserver" Pod="calico-apiserver-588dc98d7b-pw4d6" WorkloadEndpoint="172--239--50--209-k8s-calico--apiserver--588dc98d7b--pw4d6-eth0" Jan 24 00:50:58.064000 audit: BPF prog-id=253 op=LOAD Jan 24 00:50:58.065000 audit: BPF prog-id=254 op=LOAD Jan 24 00:50:58.065000 audit[4689]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4677 pid=4689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:58.065000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332613761373062366563643138326332343065656263383465343830 Jan 24 00:50:58.065000 audit: BPF prog-id=254 op=UNLOAD Jan 24 00:50:58.065000 audit[4689]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4677 pid=4689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:58.065000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332613761373062366563643138326332343065656263383465343830 Jan 24 00:50:58.065000 audit: BPF prog-id=255 op=LOAD Jan 24 00:50:58.065000 audit[4689]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4677 pid=4689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:58.065000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332613761373062366563643138326332343065656263383465343830 Jan 24 00:50:58.065000 audit: BPF prog-id=256 op=LOAD Jan 24 00:50:58.065000 audit[4689]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4677 pid=4689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:58.065000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332613761373062366563643138326332343065656263383465343830 Jan 24 00:50:58.065000 audit: BPF prog-id=256 op=UNLOAD Jan 24 00:50:58.065000 audit[4689]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4677 pid=4689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:58.065000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332613761373062366563643138326332343065656263383465343830 Jan 24 00:50:58.065000 audit: BPF prog-id=255 op=UNLOAD Jan 24 00:50:58.065000 audit[4689]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4677 pid=4689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:58.065000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332613761373062366563643138326332343065656263383465343830 Jan 24 00:50:58.065000 audit: BPF prog-id=257 op=LOAD Jan 24 00:50:58.065000 audit[4689]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4677 pid=4689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:58.065000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332613761373062366563643138326332343065656263383465343830 Jan 24 00:50:58.071752 containerd[1593]: time="2026-01-24T00:50:58.071616504Z" level=info msg="connecting to shim d7e8b3dfb416612579c15f5203534c74c3a7a1c4274b60abaeee0f0156dde380" address="unix:///run/containerd/s/fe8c93ab79eda58398c9f92084fb3fa5b592da7140da03d31b57b0122d8905f5" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:50:58.120220 systemd[1]: Started cri-containerd-d7e8b3dfb416612579c15f5203534c74c3a7a1c4274b60abaeee0f0156dde380.scope - libcontainer container d7e8b3dfb416612579c15f5203534c74c3a7a1c4274b60abaeee0f0156dde380. Jan 24 00:50:58.121000 audit[4746]: NETFILTER_CFG table=filter:134 family=2 entries=49 op=nft_register_chain pid=4746 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:50:58.121000 audit[4746]: SYSCALL arch=c000003e syscall=46 success=yes exit=25420 a0=3 a1=7ffe01406330 a2=0 a3=7ffe0140631c items=0 ppid=4097 pid=4746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:58.121000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:50:58.144279 containerd[1593]: time="2026-01-24T00:50:58.144236760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b94bdbd9d-dvkkh,Uid:8e3373a2-955e-4988-8f5b-bfdd560d0dba,Namespace:calico-system,Attempt:0,} returns sandbox id \"c2a7a70b6ecd182c240eebc84e480bfdf4ccbbcd59a261ea2957c78bf32dc6cb\"" Jan 24 00:50:58.148352 containerd[1593]: time="2026-01-24T00:50:58.148330704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:50:58.165000 audit: BPF prog-id=258 op=LOAD Jan 24 00:50:58.166000 audit: BPF prog-id=259 op=LOAD Jan 24 00:50:58.166000 audit[4735]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001fa238 a2=98 a3=0 items=0 ppid=4722 pid=4735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:58.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437653862336466623431363631323537396331356635323033353334 Jan 24 00:50:58.166000 audit: BPF prog-id=259 op=UNLOAD Jan 24 00:50:58.166000 audit[4735]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4722 pid=4735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:58.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437653862336466623431363631323537396331356635323033353334 Jan 24 00:50:58.166000 audit: BPF prog-id=260 op=LOAD Jan 24 00:50:58.166000 audit[4735]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001fa488 a2=98 a3=0 items=0 ppid=4722 pid=4735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:58.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437653862336466623431363631323537396331356635323033353334 Jan 24 00:50:58.166000 audit: BPF prog-id=261 op=LOAD Jan 24 00:50:58.166000 audit[4735]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001fa218 a2=98 a3=0 items=0 ppid=4722 pid=4735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:58.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437653862336466623431363631323537396331356635323033353334 Jan 24 00:50:58.166000 audit: BPF prog-id=261 op=UNLOAD Jan 24 00:50:58.166000 audit[4735]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4722 pid=4735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:58.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437653862336466623431363631323537396331356635323033353334 Jan 24 00:50:58.166000 audit: BPF prog-id=260 op=UNLOAD Jan 24 00:50:58.166000 audit[4735]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4722 pid=4735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:58.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437653862336466623431363631323537396331356635323033353334 Jan 24 00:50:58.166000 audit: BPF prog-id=262 op=LOAD Jan 24 00:50:58.166000 audit[4735]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001fa6e8 a2=98 a3=0 items=0 ppid=4722 pid=4735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:58.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437653862336466623431363631323537396331356635323033353334 Jan 24 00:50:58.177718 systemd-networkd[1499]: cali4eacc1c94d5: Link UP Jan 24 00:50:58.177979 systemd-networkd[1499]: cali4eacc1c94d5: Gained carrier Jan 24 00:50:58.218908 containerd[1593]: 2026-01-24 00:50:57.833 [INFO][4623] cni-plugin/plugin.go 
340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--50--209-k8s-csi--node--driver--dbqvx-eth0 csi-node-driver- calico-system f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c 714 0 2026-01-24 00:50:36 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-239-50-209 csi-node-driver-dbqvx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4eacc1c94d5 [] [] }} ContainerID="b23d7edcbb0c616b61572b37a16de7a24145de670a0bb51a802d963567089778" Namespace="calico-system" Pod="csi-node-driver-dbqvx" WorkloadEndpoint="172--239--50--209-k8s-csi--node--driver--dbqvx-" Jan 24 00:50:58.218908 containerd[1593]: 2026-01-24 00:50:57.833 [INFO][4623] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b23d7edcbb0c616b61572b37a16de7a24145de670a0bb51a802d963567089778" Namespace="calico-system" Pod="csi-node-driver-dbqvx" WorkloadEndpoint="172--239--50--209-k8s-csi--node--driver--dbqvx-eth0" Jan 24 00:50:58.218908 containerd[1593]: 2026-01-24 00:50:57.906 [INFO][4652] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b23d7edcbb0c616b61572b37a16de7a24145de670a0bb51a802d963567089778" HandleID="k8s-pod-network.b23d7edcbb0c616b61572b37a16de7a24145de670a0bb51a802d963567089778" Workload="172--239--50--209-k8s-csi--node--driver--dbqvx-eth0" Jan 24 00:50:58.218908 containerd[1593]: 2026-01-24 00:50:57.906 [INFO][4652] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b23d7edcbb0c616b61572b37a16de7a24145de670a0bb51a802d963567089778" HandleID="k8s-pod-network.b23d7edcbb0c616b61572b37a16de7a24145de670a0bb51a802d963567089778" Workload="172--239--50--209-k8s-csi--node--driver--dbqvx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5030), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-50-209", "pod":"csi-node-driver-dbqvx", "timestamp":"2026-01-24 00:50:57.906097532 +0000 UTC"}, Hostname:"172-239-50-209", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:50:58.218908 containerd[1593]: 2026-01-24 00:50:57.906 [INFO][4652] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:50:58.218908 containerd[1593]: 2026-01-24 00:50:57.991 [INFO][4652] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:50:58.218908 containerd[1593]: 2026-01-24 00:50:57.991 [INFO][4652] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-50-209' Jan 24 00:50:58.218908 containerd[1593]: 2026-01-24 00:50:58.029 [INFO][4652] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b23d7edcbb0c616b61572b37a16de7a24145de670a0bb51a802d963567089778" host="172-239-50-209" Jan 24 00:50:58.218908 containerd[1593]: 2026-01-24 00:50:58.071 [INFO][4652] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-50-209" Jan 24 00:50:58.218908 containerd[1593]: 2026-01-24 00:50:58.093 [INFO][4652] ipam/ipam.go 511: Trying affinity for 192.168.113.0/26 host="172-239-50-209" Jan 24 00:50:58.218908 containerd[1593]: 2026-01-24 00:50:58.109 [INFO][4652] ipam/ipam.go 158: Attempting to load block cidr=192.168.113.0/26 host="172-239-50-209" Jan 24 00:50:58.218908 containerd[1593]: 2026-01-24 00:50:58.116 [INFO][4652] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.113.0/26 host="172-239-50-209" Jan 24 00:50:58.218908 containerd[1593]: 2026-01-24 00:50:58.116 [INFO][4652] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.113.0/26 handle="k8s-pod-network.b23d7edcbb0c616b61572b37a16de7a24145de670a0bb51a802d963567089778" host="172-239-50-209" Jan 24 00:50:58.218908 containerd[1593]: 2026-01-24 00:50:58.122 [INFO][4652] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b23d7edcbb0c616b61572b37a16de7a24145de670a0bb51a802d963567089778 Jan 24 00:50:58.218908 containerd[1593]: 2026-01-24 00:50:58.139 [INFO][4652] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.113.0/26 handle="k8s-pod-network.b23d7edcbb0c616b61572b37a16de7a24145de670a0bb51a802d963567089778" host="172-239-50-209" Jan 24 00:50:58.218908 containerd[1593]: 2026-01-24 00:50:58.159 [INFO][4652] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.113.8/26] block=192.168.113.0/26 handle="k8s-pod-network.b23d7edcbb0c616b61572b37a16de7a24145de670a0bb51a802d963567089778" host="172-239-50-209" Jan 24 00:50:58.218908 containerd[1593]: 2026-01-24 00:50:58.159 [INFO][4652] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.113.8/26] handle="k8s-pod-network.b23d7edcbb0c616b61572b37a16de7a24145de670a0bb51a802d963567089778" host="172-239-50-209" Jan 24 00:50:58.218908 containerd[1593]: 2026-01-24 00:50:58.159 [INFO][4652] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:50:58.218908 containerd[1593]: 2026-01-24 00:50:58.159 [INFO][4652] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.113.8/26] IPv6=[] ContainerID="b23d7edcbb0c616b61572b37a16de7a24145de670a0bb51a802d963567089778" HandleID="k8s-pod-network.b23d7edcbb0c616b61572b37a16de7a24145de670a0bb51a802d963567089778" Workload="172--239--50--209-k8s-csi--node--driver--dbqvx-eth0" Jan 24 00:50:58.220661 containerd[1593]: 2026-01-24 00:50:58.169 [INFO][4623] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b23d7edcbb0c616b61572b37a16de7a24145de670a0bb51a802d963567089778" Namespace="calico-system" Pod="csi-node-driver-dbqvx" WorkloadEndpoint="172--239--50--209-k8s-csi--node--driver--dbqvx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--50--209-k8s-csi--node--driver--dbqvx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 50, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-50-209", ContainerID:"", Pod:"csi-node-driver-dbqvx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.113.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4eacc1c94d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:50:58.220661 containerd[1593]: 2026-01-24 00:50:58.170 [INFO][4623] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.8/32] ContainerID="b23d7edcbb0c616b61572b37a16de7a24145de670a0bb51a802d963567089778" Namespace="calico-system" Pod="csi-node-driver-dbqvx" WorkloadEndpoint="172--239--50--209-k8s-csi--node--driver--dbqvx-eth0" Jan 24 00:50:58.220661 containerd[1593]: 2026-01-24 00:50:58.170 [INFO][4623] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4eacc1c94d5 ContainerID="b23d7edcbb0c616b61572b37a16de7a24145de670a0bb51a802d963567089778" Namespace="calico-system" Pod="csi-node-driver-dbqvx" WorkloadEndpoint="172--239--50--209-k8s-csi--node--driver--dbqvx-eth0" Jan 24 00:50:58.220661 containerd[1593]: 2026-01-24 00:50:58.182 [INFO][4623] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b23d7edcbb0c616b61572b37a16de7a24145de670a0bb51a802d963567089778" Namespace="calico-system" Pod="csi-node-driver-dbqvx" WorkloadEndpoint="172--239--50--209-k8s-csi--node--driver--dbqvx-eth0" Jan 24 00:50:58.220661 containerd[1593]: 2026-01-24 00:50:58.188 [INFO][4623] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b23d7edcbb0c616b61572b37a16de7a24145de670a0bb51a802d963567089778" Namespace="calico-system" 
Pod="csi-node-driver-dbqvx" WorkloadEndpoint="172--239--50--209-k8s-csi--node--driver--dbqvx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--50--209-k8s-csi--node--driver--dbqvx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 50, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-50-209", ContainerID:"b23d7edcbb0c616b61572b37a16de7a24145de670a0bb51a802d963567089778", Pod:"csi-node-driver-dbqvx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.113.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4eacc1c94d5", MAC:"6e:2e:fb:c4:f1:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:50:58.220661 containerd[1593]: 2026-01-24 00:50:58.209 [INFO][4623] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b23d7edcbb0c616b61572b37a16de7a24145de670a0bb51a802d963567089778" Namespace="calico-system" Pod="csi-node-driver-dbqvx" WorkloadEndpoint="172--239--50--209-k8s-csi--node--driver--dbqvx-eth0" Jan 24 00:50:58.230237 containerd[1593]: time="2026-01-24T00:50:58.230192998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-588dc98d7b-pw4d6,Uid:7db270eb-ad67-4a41-9877-6d27ca50c210,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d7e8b3dfb416612579c15f5203534c74c3a7a1c4274b60abaeee0f0156dde380\"" Jan 24 00:50:58.253331 containerd[1593]: time="2026-01-24T00:50:58.253291438Z" level=info msg="connecting to shim b23d7edcbb0c616b61572b37a16de7a24145de670a0bb51a802d963567089778" address="unix:///run/containerd/s/1e72824decc1aa5fd27ad3a258050717d92a04adb37c051359706cf9e9f3aa5f" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:50:58.274338 containerd[1593]: time="2026-01-24T00:50:58.274281628Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:50:58.276785 containerd[1593]: time="2026-01-24T00:50:58.276741820Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:50:58.277197 containerd[1593]: time="2026-01-24T00:50:58.276937380Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 24 00:50:58.277524 kubelet[2807]: E0124 00:50:58.277486 2807 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:50:58.277591 kubelet[2807]: E0124 00:50:58.277527 2807 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:50:58.277669 kubelet[2807]: E0124 00:50:58.277643 2807 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7b94bdbd9d-dvkkh_calico-system(8e3373a2-955e-4988-8f5b-bfdd560d0dba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:50:58.277730 kubelet[2807]: E0124 00:50:58.277680 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b94bdbd9d-dvkkh" podUID="8e3373a2-955e-4988-8f5b-bfdd560d0dba" Jan 24 00:50:58.278083 containerd[1593]: time="2026-01-24T00:50:58.278049521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:50:58.280977 systemd[1]: Started cri-containerd-b23d7edcbb0c616b61572b37a16de7a24145de670a0bb51a802d963567089778.scope - libcontainer container b23d7edcbb0c616b61572b37a16de7a24145de670a0bb51a802d963567089778. 
Jan 24 00:50:58.282000 audit[4809]: NETFILTER_CFG table=filter:135 family=2 entries=40 op=nft_register_chain pid=4809 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:50:58.282000 audit[4809]: SYSCALL arch=c000003e syscall=46 success=yes exit=20784 a0=3 a1=7fff7f0d99a0 a2=0 a3=7fff7f0d998c items=0 ppid=4097 pid=4809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:58.282000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:50:58.294000 audit: BPF prog-id=263 op=LOAD Jan 24 00:50:58.295000 audit: BPF prog-id=264 op=LOAD Jan 24 00:50:58.295000 audit[4796]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4785 pid=4796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:58.295000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232336437656463626230633631366236313537326233376131366465 Jan 24 00:50:58.295000 audit: BPF prog-id=264 op=UNLOAD Jan 24 00:50:58.295000 audit[4796]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4785 pid=4796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:58.295000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232336437656463626230633631366236313537326233376131366465 Jan 24 00:50:58.295000 audit: BPF prog-id=265 op=LOAD Jan 24 00:50:58.295000 audit[4796]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4785 pid=4796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:58.295000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232336437656463626230633631366236313537326233376131366465 Jan 24 00:50:58.295000 audit: BPF prog-id=266 op=LOAD Jan 24 00:50:58.295000 audit[4796]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4785 pid=4796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:58.295000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232336437656463626230633631366236313537326233376131366465 Jan 24 00:50:58.295000 audit: BPF 
prog-id=266 op=UNLOAD Jan 24 00:50:58.295000 audit[4796]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4785 pid=4796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:58.295000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232336437656463626230633631366236313537326233376131366465 Jan 24 00:50:58.295000 audit: BPF prog-id=265 op=UNLOAD Jan 24 00:50:58.295000 audit[4796]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4785 pid=4796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:58.295000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232336437656463626230633631366236313537326233376131366465 Jan 24 00:50:58.295000 audit: BPF prog-id=267 op=LOAD Jan 24 00:50:58.295000 audit[4796]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=4785 pid=4796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:58.295000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232336437656463626230633631366236313537326233376131366465 Jan 24 00:50:58.315073 containerd[1593]: time="2026-01-24T00:50:58.315044854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbqvx,Uid:f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c,Namespace:calico-system,Attempt:0,} returns sandbox id \"b23d7edcbb0c616b61572b37a16de7a24145de670a0bb51a802d963567089778\"" Jan 24 00:50:58.318169 systemd-networkd[1499]: cali4597d9c16d4: Gained IPv6LL Jan 24 00:50:58.335373 kubelet[2807]: I0124 00:50:58.335346 2807 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:50:58.335658 kubelet[2807]: E0124 00:50:58.335640 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:58.415283 containerd[1593]: time="2026-01-24T00:50:58.415236185Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:50:58.416096 containerd[1593]: time="2026-01-24T00:50:58.416066576Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:50:58.416159 containerd[1593]: time="2026-01-24T00:50:58.416134506Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:50:58.416605 kubelet[2807]: E0124 00:50:58.416557 2807 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:50:58.416605 kubelet[2807]: E0124 00:50:58.416597 2807 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:50:58.416814 kubelet[2807]: E0124 00:50:58.416752 2807 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-588dc98d7b-pw4d6_calico-apiserver(7db270eb-ad67-4a41-9877-6d27ca50c210): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:50:58.417116 kubelet[2807]: E0124 00:50:58.416789 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588dc98d7b-pw4d6" podUID="7db270eb-ad67-4a41-9877-6d27ca50c210" Jan 24 00:50:58.417262 containerd[1593]: time="2026-01-24T00:50:58.417232197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:50:58.444956 systemd-networkd[1499]: cali9aa2ef2bc16: Gained IPv6LL Jan 24 00:50:58.545857 containerd[1593]: time="2026-01-24T00:50:58.544893123Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:50:58.546221 containerd[1593]: time="2026-01-24T00:50:58.546177824Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:50:58.546221 containerd[1593]: time="2026-01-24T00:50:58.546233394Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 24 00:50:58.546469 kubelet[2807]: E0124 00:50:58.546338 2807 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:50:58.546469 kubelet[2807]: E0124 00:50:58.546381 2807 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:50:58.546469 kubelet[2807]: E0124 00:50:58.546435 2807 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-dbqvx_calico-system(f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:50:58.548312 containerd[1593]: time="2026-01-24T00:50:58.548241636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:50:58.675982 containerd[1593]: time="2026-01-24T00:50:58.675925571Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:50:58.676841 containerd[1593]: time="2026-01-24T00:50:58.676792822Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:50:58.676957 containerd[1593]: time="2026-01-24T00:50:58.676829652Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 24 00:50:58.677103 kubelet[2807]: E0124 00:50:58.677059 2807 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:50:58.677103 kubelet[2807]: E0124 00:50:58.677102 2807 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:50:58.677264 kubelet[2807]: E0124 00:50:58.677158 2807 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-dbqvx_calico-system(f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:50:58.677264 kubelet[2807]: E0124 00:50:58.677191 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dbqvx" podUID="f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c" Jan 24 00:50:58.881951 kubelet[2807]: E0124 00:50:58.881860 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for 
\"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dbqvx" podUID="f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c" Jan 24 00:50:58.882415 kubelet[2807]: E0124 00:50:58.882160 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588dc98d7b-pw4d6" podUID="7db270eb-ad67-4a41-9877-6d27ca50c210" Jan 24 00:50:58.883228 kubelet[2807]: E0124 00:50:58.883198 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:58.883831 kubelet[2807]: E0124 00:50:58.883765 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:58.884706 kubelet[2807]: E0124 00:50:58.884674 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-rczw4" podUID="ed711851-0b36-4791-909e-763bb4210e62" Jan 24 00:50:58.884774 kubelet[2807]: E0124 00:50:58.884741 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b94bdbd9d-dvkkh" podUID="8e3373a2-955e-4988-8f5b-bfdd560d0dba" Jan 24 00:50:59.044000 audit[4876]: NETFILTER_CFG table=filter:136 family=2 entries=14 op=nft_register_rule pid=4876 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:50:59.045849 kernel: kauditd_printk_skb: 227 callbacks suppressed Jan 24 00:50:59.045915 kernel: audit: type=1325 audit(1769215859.044:758): table=filter:136 family=2 entries=14 op=nft_register_rule pid=4876 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:50:59.044000 audit[4876]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc46151e80 a2=0 a3=7ffc46151e6c items=0 ppid=2920 pid=4876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 
00:50:59.053258 kernel: audit: type=1300 audit(1769215859.044:758): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc46151e80 a2=0 a3=7ffc46151e6c items=0 ppid=2920 pid=4876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:59.044000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:50:59.063174 kernel: audit: type=1327 audit(1769215859.044:758): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:50:59.066000 audit[4876]: NETFILTER_CFG table=nat:137 family=2 entries=56 op=nft_register_chain pid=4876 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:50:59.066000 audit[4876]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffc46151e80 a2=0 a3=7ffc46151e6c items=0 ppid=2920 pid=4876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:59.077653 kernel: audit: type=1325 audit(1769215859.066:759): table=nat:137 family=2 entries=56 op=nft_register_chain pid=4876 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:50:59.077741 kernel: audit: type=1300 audit(1769215859.066:759): arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffc46151e80 a2=0 a3=7ffc46151e6c items=0 ppid=2920 pid=4876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:50:59.066000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:50:59.085859 systemd-networkd[1499]: calibe42bb70c98: Gained IPv6LL Jan 24 00:50:59.088918 kernel: audit: type=1327 audit(1769215859.066:759): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:50:59.341215 systemd-networkd[1499]: caliadda9970295: Gained IPv6LL Jan 24 00:50:59.597330 systemd-networkd[1499]: cali4eacc1c94d5: Gained IPv6LL Jan 24 00:50:59.886460 kubelet[2807]: E0124 00:50:59.886355 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:50:59.888323 kubelet[2807]: E0124 00:50:59.887828 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588dc98d7b-pw4d6" podUID="7db270eb-ad67-4a41-9877-6d27ca50c210" Jan 24 00:50:59.888323 kubelet[2807]: E0124 00:50:59.887926 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b94bdbd9d-dvkkh" podUID="8e3373a2-955e-4988-8f5b-bfdd560d0dba" Jan 24 00:50:59.888516 kubelet[2807]: E0124 00:50:59.888438 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dbqvx" podUID="f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c" Jan 24 00:51:00.698889 containerd[1593]: time="2026-01-24T00:51:00.698609855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:51:00.836630 containerd[1593]: time="2026-01-24T00:51:00.836573903Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:51:00.838018 containerd[1593]: time="2026-01-24T00:51:00.837975924Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:51:00.838062 containerd[1593]: time="2026-01-24T00:51:00.838047244Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 24 00:51:00.838286 kubelet[2807]: E0124 00:51:00.838216 2807 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:51:00.838286 kubelet[2807]: E0124 00:51:00.838263 2807 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:51:00.838456 kubelet[2807]: E0124 00:51:00.838329 2807 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-57bd667b79-zp7m5_calico-system(6b257eba-5f3e-4c58-93ff-911ee6aa75db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:51:00.839926 containerd[1593]: time="2026-01-24T00:51:00.839897926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:51:00.965417 containerd[1593]: time="2026-01-24T00:51:00.965280903Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:51:00.966788 
containerd[1593]: time="2026-01-24T00:51:00.966689774Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:51:00.966788 containerd[1593]: time="2026-01-24T00:51:00.966726174Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 24 00:51:00.967204 kubelet[2807]: E0124 00:51:00.966994 2807 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:51:00.967204 kubelet[2807]: E0124 00:51:00.967050 2807 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:51:00.967204 kubelet[2807]: E0124 00:51:00.967107 2807 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-57bd667b79-zp7m5_calico-system(6b257eba-5f3e-4c58-93ff-911ee6aa75db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:51:00.967204 kubelet[2807]: E0124 00:51:00.967143 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bd667b79-zp7m5" podUID="6b257eba-5f3e-4c58-93ff-911ee6aa75db" Jan 24 00:51:09.698535 containerd[1593]: time="2026-01-24T00:51:09.698074507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:51:09.833112 containerd[1593]: time="2026-01-24T00:51:09.832813711Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:51:09.834807 containerd[1593]: time="2026-01-24T00:51:09.833734632Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:51:09.834894 containerd[1593]: time="2026-01-24T00:51:09.834877172Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 24 00:51:09.835269 kubelet[2807]: E0124 00:51:09.835207 2807 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:51:09.835269 kubelet[2807]: E0124 00:51:09.835252 2807 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:51:09.836086 kubelet[2807]: E0124 00:51:09.835754 2807 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-rczw4_calico-system(ed711851-0b36-4791-909e-763bb4210e62): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:51:09.836178 kubelet[2807]: E0124 00:51:09.836154 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-rczw4" podUID="ed711851-0b36-4791-909e-763bb4210e62" Jan 24 00:51:10.700595 containerd[1593]: time="2026-01-24T00:51:10.700389880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:51:10.833317 containerd[1593]: time="2026-01-24T00:51:10.833132670Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:51:10.834201 containerd[1593]: time="2026-01-24T00:51:10.834098851Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:51:10.834201 containerd[1593]: time="2026-01-24T00:51:10.834168841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:51:10.834519 kubelet[2807]: E0124 00:51:10.834457 2807 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:51:10.834574 kubelet[2807]: E0124 00:51:10.834542 2807 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:51:10.834721 kubelet[2807]: E0124 00:51:10.834660 2807 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-588dc98d7b-7slqw_calico-apiserver(7daaf6bf-447d-4008-8095-834f38ae42be): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:51:10.834831 kubelet[2807]: E0124 
00:51:10.834717 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588dc98d7b-7slqw" podUID="7daaf6bf-447d-4008-8095-834f38ae42be" Jan 24 00:51:12.699388 containerd[1593]: time="2026-01-24T00:51:12.699331274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:51:12.826251 containerd[1593]: time="2026-01-24T00:51:12.826191460Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:51:12.827419 containerd[1593]: time="2026-01-24T00:51:12.827386210Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:51:12.827843 containerd[1593]: time="2026-01-24T00:51:12.827401100Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 24 00:51:12.827899 kubelet[2807]: E0124 00:51:12.827648 2807 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:51:12.827899 kubelet[2807]: E0124 00:51:12.827703 2807 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:51:12.829738 kubelet[2807]: E0124 00:51:12.828578 2807 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-dbqvx_calico-system(f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:51:12.829880 containerd[1593]: time="2026-01-24T00:51:12.828720591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:51:12.963131 containerd[1593]: time="2026-01-24T00:51:12.962947560Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:51:12.964380 containerd[1593]: time="2026-01-24T00:51:12.964314441Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:51:12.964380 containerd[1593]: time="2026-01-24T00:51:12.964352111Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:51:12.964531 kubelet[2807]: E0124 00:51:12.964487 2807 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:51:12.964531 kubelet[2807]: E0124 00:51:12.964524 2807 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:51:12.964712 kubelet[2807]: E0124 00:51:12.964667 2807 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-588dc98d7b-pw4d6_calico-apiserver(7db270eb-ad67-4a41-9877-6d27ca50c210): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:51:12.964777 kubelet[2807]: E0124 00:51:12.964710 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588dc98d7b-pw4d6" podUID="7db270eb-ad67-4a41-9877-6d27ca50c210" Jan 24 00:51:12.965332 containerd[1593]: time="2026-01-24T00:51:12.965305741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:51:13.100106 containerd[1593]: time="2026-01-24T00:51:13.099915120Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:51:13.101072 containerd[1593]: time="2026-01-24T00:51:13.101047551Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:51:13.101243 containerd[1593]: time="2026-01-24T00:51:13.101141351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 24 00:51:13.101643 kubelet[2807]: E0124 00:51:13.101584 2807 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:51:13.101695 kubelet[2807]: E0124 00:51:13.101653 2807 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:51:13.103104 kubelet[2807]: E0124 00:51:13.101758 2807 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-dbqvx_calico-system(f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
logger="UnhandledError" Jan 24 00:51:13.103549 kubelet[2807]: E0124 00:51:13.103513 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dbqvx" podUID="f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c" Jan 24 00:51:14.698952 containerd[1593]: time="2026-01-24T00:51:14.698427429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:51:14.846475 containerd[1593]: time="2026-01-24T00:51:14.846405304Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:51:14.847412 containerd[1593]: time="2026-01-24T00:51:14.847354045Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:51:14.847463 containerd[1593]: time="2026-01-24T00:51:14.847451865Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 24 00:51:14.848820 kubelet[2807]: E0124 00:51:14.847736 2807 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:51:14.849408 kubelet[2807]: E0124 00:51:14.849201 2807 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:51:14.849408 kubelet[2807]: E0124 00:51:14.849324 2807 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7b94bdbd9d-dvkkh_calico-system(8e3373a2-955e-4988-8f5b-bfdd560d0dba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:51:14.849408 kubelet[2807]: E0124 00:51:14.849364 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b94bdbd9d-dvkkh" podUID="8e3373a2-955e-4988-8f5b-bfdd560d0dba" Jan 24 00:51:15.701006 kubelet[2807]: E0124 00:51:15.700945 2807 pod_workers.go:1324] "Error 
syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bd667b79-zp7m5" podUID="6b257eba-5f3e-4c58-93ff-911ee6aa75db" Jan 24 00:51:22.702429 kubelet[2807]: E0124 00:51:22.702093 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588dc98d7b-7slqw" podUID="7daaf6bf-447d-4008-8095-834f38ae42be" Jan 24 00:51:23.698578 kubelet[2807]: E0124 00:51:23.698359 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-rczw4" podUID="ed711851-0b36-4791-909e-763bb4210e62" Jan 24 00:51:24.704273 kubelet[2807]: E0124 00:51:24.704231 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dbqvx" podUID="f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c" Jan 24 00:51:26.701886 containerd[1593]: time="2026-01-24T00:51:26.701835649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:51:26.836006 containerd[1593]: time="2026-01-24T00:51:26.835943375Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:51:26.837275 containerd[1593]: time="2026-01-24T00:51:26.837233816Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:51:26.837331 containerd[1593]: time="2026-01-24T00:51:26.837306246Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 24 00:51:26.837524 kubelet[2807]: E0124 00:51:26.837491 2807 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:51:26.837906 kubelet[2807]: E0124 00:51:26.837535 2807 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:51:26.837906 kubelet[2807]: E0124 00:51:26.837639 2807 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-57bd667b79-zp7m5_calico-system(6b257eba-5f3e-4c58-93ff-911ee6aa75db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:51:26.839230 containerd[1593]: time="2026-01-24T00:51:26.839193407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:51:26.961065 containerd[1593]: time="2026-01-24T00:51:26.960715196Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:51:26.961755 containerd[1593]: time="2026-01-24T00:51:26.961728387Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:51:26.961860 containerd[1593]: time="2026-01-24T00:51:26.961811037Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 24 00:51:26.962051 kubelet[2807]: E0124 00:51:26.962012 2807 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:51:26.962100 kubelet[2807]: E0124 00:51:26.962058 2807 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:51:26.962135 kubelet[2807]: E0124 00:51:26.962126 2807 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-57bd667b79-zp7m5_calico-system(6b257eba-5f3e-4c58-93ff-911ee6aa75db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:51:26.962190 kubelet[2807]: E0124 
00:51:26.962162 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bd667b79-zp7m5" podUID="6b257eba-5f3e-4c58-93ff-911ee6aa75db" Jan 24 00:51:27.699511 kubelet[2807]: E0124 00:51:27.699443 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588dc98d7b-pw4d6" podUID="7db270eb-ad67-4a41-9877-6d27ca50c210" Jan 24 00:51:27.699734 kubelet[2807]: E0124 00:51:27.699663 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b94bdbd9d-dvkkh" podUID="8e3373a2-955e-4988-8f5b-bfdd560d0dba" Jan 24 00:51:33.699289 containerd[1593]: time="2026-01-24T00:51:33.699227616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:51:33.713752 kubelet[2807]: E0124 00:51:33.713717 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:51:33.832158 containerd[1593]: time="2026-01-24T00:51:33.832094627Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:51:33.834927 containerd[1593]: time="2026-01-24T00:51:33.834864885Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:51:33.835165 containerd[1593]: time="2026-01-24T00:51:33.835148673Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:51:33.835365 kubelet[2807]: E0124 00:51:33.835330 2807 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:51:33.835409 kubelet[2807]: E0124 00:51:33.835372 2807 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:51:33.835445 kubelet[2807]: E0124 00:51:33.835433 2807 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-588dc98d7b-7slqw_calico-apiserver(7daaf6bf-447d-4008-8095-834f38ae42be): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:51:33.835488 kubelet[2807]: E0124 00:51:33.835462 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588dc98d7b-7slqw" podUID="7daaf6bf-447d-4008-8095-834f38ae42be" Jan 24 00:51:35.700697 containerd[1593]: time="2026-01-24T00:51:35.700642380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:51:36.044608 containerd[1593]: time="2026-01-24T00:51:36.044131626Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:51:36.045336 containerd[1593]: time="2026-01-24T00:51:36.045291320Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:51:36.045491 containerd[1593]: time="2026-01-24T00:51:36.045452290Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 24 00:51:36.045827 kubelet[2807]: E0124 00:51:36.045737 2807 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:51:36.047306 kubelet[2807]: E0124 00:51:36.045852 2807 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:51:36.047341 containerd[1593]: time="2026-01-24T00:51:36.046374036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:51:36.048078 kubelet[2807]: E0124 00:51:36.047469 2807 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-dbqvx_calico-system(f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:51:36.174393 containerd[1593]: time="2026-01-24T00:51:36.174326111Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:51:36.175669 containerd[1593]: time="2026-01-24T00:51:36.175637105Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:51:36.175738 containerd[1593]: time="2026-01-24T00:51:36.175714305Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 24 00:51:36.176117 kubelet[2807]: E0124 00:51:36.175921 2807 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:51:36.176117 kubelet[2807]: E0124 00:51:36.175971 2807 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:51:36.176218 kubelet[2807]: E0124 00:51:36.176136 2807 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-rczw4_calico-system(ed711851-0b36-4791-909e-763bb4210e62): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:51:36.176218 kubelet[2807]: E0124 00:51:36.176178 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-rczw4" podUID="ed711851-0b36-4791-909e-763bb4210e62" Jan 24 00:51:36.176592 containerd[1593]: time="2026-01-24T00:51:36.176454213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:51:36.311363 containerd[1593]: time="2026-01-24T00:51:36.311321148Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:51:36.312457 containerd[1593]: time="2026-01-24T00:51:36.312338094Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:51:36.312457 containerd[1593]: time="2026-01-24T00:51:36.312375194Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 24 00:51:36.312674 kubelet[2807]: E0124 00:51:36.312636 2807 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:51:36.312730 kubelet[2807]: E0124 00:51:36.312686 2807 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:51:36.312858 kubelet[2807]: E0124 00:51:36.312753 2807 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-dbqvx_calico-system(f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:51:36.312900 kubelet[2807]: E0124 00:51:36.312873 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dbqvx" podUID="f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c" Jan 24 00:51:37.699900 kubelet[2807]: E0124 00:51:37.699828 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bd667b79-zp7m5" podUID="6b257eba-5f3e-4c58-93ff-911ee6aa75db" Jan 24 00:51:39.698318 kubelet[2807]: E0124 00:51:39.696934 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:51:40.701214 kubelet[2807]: E0124 00:51:40.700972 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:51:41.701000 containerd[1593]: time="2026-01-24T00:51:41.700736811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:51:41.834301 containerd[1593]: time="2026-01-24T00:51:41.834250853Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:51:41.836933 containerd[1593]: time="2026-01-24T00:51:41.836859903Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:51:41.837202 containerd[1593]: time="2026-01-24T00:51:41.836955273Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 24 00:51:41.837494 kubelet[2807]: E0124 00:51:41.837430 2807 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:51:41.838726 kubelet[2807]: E0124 00:51:41.837839 2807 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:51:41.838726 kubelet[2807]: E0124 00:51:41.837961 2807 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7b94bdbd9d-dvkkh_calico-system(8e3373a2-955e-4988-8f5b-bfdd560d0dba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:51:41.838726 kubelet[2807]: E0124 00:51:41.838599 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b94bdbd9d-dvkkh" podUID="8e3373a2-955e-4988-8f5b-bfdd560d0dba" Jan 24 00:51:42.699781 containerd[1593]: time="2026-01-24T00:51:42.699305259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:51:42.842739 containerd[1593]: time="2026-01-24T00:51:42.842637221Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:51:42.844093 containerd[1593]: time="2026-01-24T00:51:42.844055305Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:51:42.844192 containerd[1593]: time="2026-01-24T00:51:42.844163605Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:51:42.844474 kubelet[2807]: E0124 00:51:42.844401 2807 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:51:42.844474 kubelet[2807]: E0124 00:51:42.844450 2807 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:51:42.845391 kubelet[2807]: E0124 00:51:42.844530 2807 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-588dc98d7b-pw4d6_calico-apiserver(7db270eb-ad67-4a41-9877-6d27ca50c210): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:51:42.845391 kubelet[2807]: E0124 00:51:42.844580 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588dc98d7b-pw4d6" podUID="7db270eb-ad67-4a41-9877-6d27ca50c210" Jan 24 00:51:44.699263 kubelet[2807]: E0124 00:51:44.698929 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588dc98d7b-7slqw" podUID="7daaf6bf-447d-4008-8095-834f38ae42be" Jan 24 00:51:48.706503 kubelet[2807]: E0124 00:51:48.706437 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bd667b79-zp7m5" podUID="6b257eba-5f3e-4c58-93ff-911ee6aa75db" Jan 24 00:51:49.699013 kubelet[2807]: E0124 00:51:49.698970 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-rczw4" podUID="ed711851-0b36-4791-909e-763bb4210e62" Jan 24 00:51:49.699887 kubelet[2807]: E0124 00:51:49.699747 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dbqvx" podUID="f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c" Jan 24 00:51:54.701020 kubelet[2807]: E0124 00:51:54.700597 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b94bdbd9d-dvkkh" podUID="8e3373a2-955e-4988-8f5b-bfdd560d0dba" Jan 24 00:51:56.447613 update_engine[1570]: I20260124 00:51:56.447022 1570 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 24 00:51:56.447613 update_engine[1570]: I20260124 00:51:56.447078 1570 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 24 00:51:56.447613 update_engine[1570]: I20260124 00:51:56.447333 1570 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 24 00:51:56.449130 update_engine[1570]: I20260124 00:51:56.449002 1570 omaha_request_params.cc:62] Current group set to alpha Jan 24 00:51:56.450063 update_engine[1570]: I20260124 00:51:56.450038 1570 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 24 00:51:56.450935 update_engine[1570]: I20260124 00:51:56.450487 1570 update_attempter.cc:643] Scheduling an action processor start. Jan 24 00:51:56.450935 update_engine[1570]: I20260124 00:51:56.450521 1570 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 24 00:51:56.457831 update_engine[1570]: I20260124 00:51:56.457790 1570 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 24 00:51:56.457968 locksmithd[1613]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 24 00:51:56.458170 update_engine[1570]: I20260124 00:51:56.457938 1570 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 24 00:51:56.458823 update_engine[1570]: I20260124 00:51:56.458206 1570 omaha_request_action.cc:272] Request: Jan 24 00:51:56.458823 update_engine[1570]: Jan 24 00:51:56.458823 update_engine[1570]: Jan 24 00:51:56.458823 update_engine[1570]: Jan 24 00:51:56.458823 update_engine[1570]: Jan 24 00:51:56.458823 update_engine[1570]: Jan 24 00:51:56.458823 update_engine[1570]: Jan 24 00:51:56.458823 update_engine[1570]: Jan 24 00:51:56.458823 update_engine[1570]: Jan 24 00:51:56.458823 update_engine[1570]: I20260124 00:51:56.458222 1570 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 24 00:51:56.461446 update_engine[1570]: I20260124 00:51:56.461048 1570 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 24 00:51:56.462107 update_engine[1570]: I20260124 00:51:56.462067 1570 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 24 00:51:56.462777 update_engine[1570]: E20260124 00:51:56.462755 1570 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 24 00:51:56.462894 update_engine[1570]: I20260124 00:51:56.462877 1570 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 24 00:51:56.700125 kubelet[2807]: E0124 00:51:56.699202 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588dc98d7b-pw4d6" podUID="7db270eb-ad67-4a41-9877-6d27ca50c210" Jan 24 00:51:59.699079 kubelet[2807]: E0124 00:51:59.699030 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588dc98d7b-7slqw" podUID="7daaf6bf-447d-4008-8095-834f38ae42be" Jan 24 00:51:59.700569 kubelet[2807]: E0124 00:51:59.700526 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bd667b79-zp7m5" podUID="6b257eba-5f3e-4c58-93ff-911ee6aa75db" Jan 24 00:52:00.697503 kubelet[2807]: E0124 00:52:00.697418 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:52:01.697825 kubelet[2807]: E0124 00:52:01.697726 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:52:01.699005 kubelet[2807]: E0124 00:52:01.698883 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-rczw4" podUID="ed711851-0b36-4791-909e-763bb4210e62" Jan 24 00:52:03.699230 kubelet[2807]: E0124 
00:52:03.699170 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dbqvx" podUID="f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c" Jan 24 00:52:05.698661 kubelet[2807]: E0124 00:52:05.697383 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:52:06.398822 update_engine[1570]: I20260124 00:52:06.398726 1570 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 24 00:52:06.400458 update_engine[1570]: I20260124 00:52:06.398846 1570 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 24 00:52:06.400458 update_engine[1570]: I20260124 00:52:06.399237 1570 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 24 00:52:06.400458 update_engine[1570]: E20260124 00:52:06.400075 1570 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 24 00:52:06.400458 update_engine[1570]: I20260124 00:52:06.400125 1570 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 24 00:52:06.698873 kubelet[2807]: E0124 00:52:06.698239 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b94bdbd9d-dvkkh" podUID="8e3373a2-955e-4988-8f5b-bfdd560d0dba" Jan 24 00:52:08.127943 systemd[1]: Started sshd@9-172.239.50.209:22-54.159.252.252:60900.service - OpenSSH per-connection server daemon (54.159.252.252:60900). Jan 24 00:52:08.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.239.50.209:22-54.159.252.252:60900 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:08.137818 kernel: audit: type=1130 audit(1769215928.127:760): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.239.50.209:22-54.159.252.252:60900 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:10.444907 sshd[4967]: Connection closed by 54.159.252.252 port 60900 [preauth] Jan 24 00:52:10.447922 systemd[1]: sshd@9-172.239.50.209:22-54.159.252.252:60900.service: Deactivated successfully. 
Jan 24 00:52:10.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.239.50.209:22-54.159.252.252:60900 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:10.455833 kernel: audit: type=1131 audit(1769215930.447:761): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.239.50.209:22-54.159.252.252:60900 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:10.700834 kubelet[2807]: E0124 00:52:10.700685 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588dc98d7b-pw4d6" podUID="7db270eb-ad67-4a41-9877-6d27ca50c210" Jan 24 00:52:10.702353 containerd[1593]: time="2026-01-24T00:52:10.702315176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:52:10.843085 containerd[1593]: time="2026-01-24T00:52:10.843034275Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:52:10.844169 containerd[1593]: time="2026-01-24T00:52:10.844135314Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:52:10.844229 containerd[1593]: time="2026-01-24T00:52:10.844210463Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 24 00:52:10.844409 kubelet[2807]: E0124 00:52:10.844374 2807 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:52:10.844509 kubelet[2807]: E0124 00:52:10.844494 2807 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:52:10.844805 kubelet[2807]: E0124 00:52:10.844633 2807 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-57bd667b79-zp7m5_calico-system(6b257eba-5f3e-4c58-93ff-911ee6aa75db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:52:10.846741 containerd[1593]: time="2026-01-24T00:52:10.846715079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:52:10.983995 containerd[1593]: time="2026-01-24T00:52:10.983666064Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:52:10.984540 containerd[1593]: time="2026-01-24T00:52:10.984505163Z" level=error 
msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:52:10.984702 containerd[1593]: time="2026-01-24T00:52:10.984548263Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 24 00:52:10.984930 kubelet[2807]: E0124 00:52:10.984860 2807 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:52:10.984989 kubelet[2807]: E0124 00:52:10.984943 2807 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:52:10.985116 kubelet[2807]: E0124 00:52:10.985080 2807 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-57bd667b79-zp7m5_calico-system(6b257eba-5f3e-4c58-93ff-911ee6aa75db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:52:10.985188 kubelet[2807]: E0124 00:52:10.985140 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bd667b79-zp7m5" podUID="6b257eba-5f3e-4c58-93ff-911ee6aa75db" Jan 24 00:52:12.699746 kubelet[2807]: E0124 00:52:12.699528 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588dc98d7b-7slqw" podUID="7daaf6bf-447d-4008-8095-834f38ae42be" Jan 24 00:52:14.702416 kubelet[2807]: E0124 00:52:14.699723 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:52:14.702416 kubelet[2807]: E0124 00:52:14.701353 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-rczw4" podUID="ed711851-0b36-4791-909e-763bb4210e62" Jan 24 00:52:14.702416 kubelet[2807]: E0124 00:52:14.701688 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dbqvx" podUID="f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c" Jan 24 00:52:16.398486 update_engine[1570]: I20260124 00:52:16.398389 1570 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 24 00:52:16.399014 update_engine[1570]: I20260124 00:52:16.398506 1570 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 24 00:52:16.399014 update_engine[1570]: I20260124 00:52:16.398919 1570 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 24 00:52:16.399523 update_engine[1570]: E20260124 00:52:16.399482 1570 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 24 00:52:16.399557 update_engine[1570]: I20260124 00:52:16.399533 1570 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 24 00:52:17.698142 kubelet[2807]: E0124 00:52:17.698096 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b94bdbd9d-dvkkh" podUID="8e3373a2-955e-4988-8f5b-bfdd560d0dba" Jan 24 00:52:19.697824 kubelet[2807]: E0124 00:52:19.697040 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:52:19.766458 systemd[1]: Started sshd@10-172.239.50.209:22-68.220.241.50:36172.service - OpenSSH per-connection server daemon (68.220.241.50:36172). Jan 24 00:52:19.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.239.50.209:22-68.220.241.50:36172 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:52:19.773820 kernel: audit: type=1130 audit(1769215939.765:762): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.239.50.209:22-68.220.241.50:36172 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:19.920000 audit[4987]: USER_ACCT pid=4987 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:19.921409 sshd[4987]: Accepted publickey for core from 68.220.241.50 port 36172 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:52:19.924268 sshd-session[4987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:52:19.928829 kernel: audit: type=1101 audit(1769215939.920:763): pid=4987 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:19.922000 audit[4987]: CRED_ACQ pid=4987 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:19.934288 systemd-logind[1569]: New session 9 of user core. Jan 24 00:52:19.937340 kernel: audit: type=1103 audit(1769215939.922:764): pid=4987 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:19.937392 kernel: audit: type=1006 audit(1769215939.922:765): pid=4987 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 24 00:52:19.922000 audit[4987]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd3c0f310 a2=3 a3=0 items=0 ppid=1 pid=4987 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:19.941201 systemd[1]: Started session-9.scope - Session 9 of User core. 
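The PullImage failures above all end in the same containerd error: the registry answers 404 for every calico image at tag v3.30.4. A minimal sketch of that same check done directly against the standard OCI registry v2 API follows; the repository name is taken from the log, but the anonymous pull-token flow and Accept header are assumptions about ghcr.io, not anything these entries confirm.

```python
# Sketch: ask ghcr.io whether the tag containerd failed to resolve actually
# exists, via the plain registry v2 API. Token endpoint and manifest path
# are assumed standard v2 behaviour for a public repository.
import json
import urllib.error
import urllib.request

REPO = "flatcar/calico/whisker-backend"  # repository from the log entries above
TAG = "v3.30.4"                          # tag that containerd reports as not found

def manifest_exists(repo: str, tag: str) -> bool:
    # Anonymous pull token for the repository (assumed to be allowed).
    token_url = f"https://ghcr.io/token?scope=repository:{repo}:pull"
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]
    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repo}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json",
        },
        method="HEAD",
    )
    try:
        urllib.request.urlopen(req)
        return True
    except urllib.error.HTTPError as err:
        if err.code == 404:  # the condition containerd surfaces as "not found"
            return False
        raise

if __name__ == "__main__":
    print(manifest_exists(REPO, TAG))
```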
Jan 24 00:52:19.942219 kernel: audit: type=1300 audit(1769215939.922:765): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd3c0f310 a2=3 a3=0 items=0 ppid=1 pid=4987 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:19.922000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:52:19.948845 kernel: audit: type=1327 audit(1769215939.922:765): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:52:19.946000 audit[4987]: USER_START pid=4987 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:19.952824 kernel: audit: type=1105 audit(1769215939.946:766): pid=4987 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:19.950000 audit[4991]: CRED_ACQ pid=4991 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:19.960886 kernel: audit: type=1103 audit(1769215939.950:767): pid=4991 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:20.068940 sshd[4991]: Connection closed by 68.220.241.50 port 36172 Jan 24 00:52:20.070876 sshd-session[4987]: pam_unix(sshd:session): session closed for user core Jan 24 00:52:20.071000 audit[4987]: USER_END pid=4987 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:20.076236 systemd[1]: sshd@10-172.239.50.209:22-68.220.241.50:36172.service: Deactivated successfully. Jan 24 00:52:20.078168 systemd[1]: session-9.scope: Deactivated successfully. Jan 24 00:52:20.081854 kernel: audit: type=1106 audit(1769215940.071:768): pid=4987 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:20.082058 systemd-logind[1569]: Session 9 logged out. Waiting for processes to exit. Jan 24 00:52:20.084388 systemd-logind[1569]: Removed session 9. 
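The recurring dns.go "Nameserver limits exceeded" warnings above come from the resolver list being longer than what kubelet will apply: only the first three nameservers are kept, which is exactly the applied line printed in the log. A rough illustration of that truncation, assuming a resolv.conf with one extra entry (the fourth address below is a made-up placeholder, and the limit of three mirrors the log output rather than kubelet's source):

```python
# Illustration only: why the log shows "Nameserver limits exceeded".
# Only the three applied addresses are taken from the log; the rest is assumed.
MAX_NAMESERVERS = 3  # limit implied by the applied nameserver line above

resolv_conf = """\
nameserver 172.232.0.18
nameserver 172.232.0.17
nameserver 172.232.0.16
nameserver 192.0.2.53
"""  # last entry is a placeholder (TEST-NET-1 address)

nameservers = [
    line.split()[1]
    for line in resolv_conf.splitlines()
    if line.startswith("nameserver")
]
applied, omitted = nameservers[:MAX_NAMESERVERS], nameservers[MAX_NAMESERVERS:]

if omitted:
    print("Nameserver limits were exceeded, some nameservers have been omitted,")
print("the applied nameserver line is:", " ".join(applied))
```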
Jan 24 00:52:20.072000 audit[4987]: CRED_DISP pid=4987 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:20.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.239.50.209:22-68.220.241.50:36172 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:20.092815 kernel: audit: type=1104 audit(1769215940.072:769): pid=4987 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:22.698163 kubelet[2807]: E0124 00:52:22.698115 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588dc98d7b-pw4d6" podUID="7db270eb-ad67-4a41-9877-6d27ca50c210" Jan 24 00:52:23.699783 kubelet[2807]: E0124 00:52:23.699730 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bd667b79-zp7m5" podUID="6b257eba-5f3e-4c58-93ff-911ee6aa75db" Jan 24 00:52:25.106446 systemd[1]: Started sshd@11-172.239.50.209:22-68.220.241.50:46220.service - OpenSSH per-connection server daemon (68.220.241.50:46220). Jan 24 00:52:25.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.239.50.209:22-68.220.241.50:46220 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:25.109122 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:52:25.109189 kernel: audit: type=1130 audit(1769215945.106:771): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.239.50.209:22-68.220.241.50:46220 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:52:25.280906 kernel: audit: type=1101 audit(1769215945.271:772): pid=5020 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:25.271000 audit[5020]: USER_ACCT pid=5020 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:25.281048 sshd[5020]: Accepted publickey for core from 68.220.241.50 port 46220 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:52:25.283355 sshd-session[5020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:52:25.281000 audit[5020]: CRED_ACQ pid=5020 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:25.295932 kernel: audit: type=1103 audit(1769215945.281:773): pid=5020 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:25.295972 kernel: audit: type=1006 audit(1769215945.281:774): pid=5020 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 24 00:52:25.281000 audit[5020]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdfef5d530 a2=3 a3=0 items=0 ppid=1 pid=5020 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:25.307514 kernel: audit: type=1300 audit(1769215945.281:774): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdfef5d530 a2=3 a3=0 items=0 ppid=1 pid=5020 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:25.307558 kernel: audit: type=1327 audit(1769215945.281:774): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:52:25.281000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:52:25.309315 systemd-logind[1569]: New session 10 of user core. Jan 24 00:52:25.315040 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 24 00:52:25.319000 audit[5020]: USER_START pid=5020 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:25.329822 kernel: audit: type=1105 audit(1769215945.319:775): pid=5020 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:25.321000 audit[5024]: CRED_ACQ pid=5024 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:25.337833 kernel: audit: type=1103 audit(1769215945.321:776): pid=5024 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:25.455433 sshd[5024]: Connection closed by 68.220.241.50 port 46220 Jan 24 00:52:25.456619 sshd-session[5020]: pam_unix(sshd:session): session closed for user core Jan 24 00:52:25.460000 audit[5020]: USER_END pid=5020 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:25.472824 kernel: audit: type=1106 audit(1769215945.460:777): pid=5020 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:25.473853 systemd[1]: sshd@11-172.239.50.209:22-68.220.241.50:46220.service: Deactivated successfully. Jan 24 00:52:25.475125 systemd-logind[1569]: Session 10 logged out. Waiting for processes to exit. Jan 24 00:52:25.479267 systemd[1]: session-10.scope: Deactivated successfully. Jan 24 00:52:25.483313 systemd-logind[1569]: Removed session 10. Jan 24 00:52:25.460000 audit[5020]: CRED_DISP pid=5020 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:25.515871 kernel: audit: type=1104 audit(1769215945.460:778): pid=5020 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:25.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.239.50.209:22-68.220.241.50:46220 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:52:26.388947 update_engine[1570]: I20260124 00:52:26.388850 1570 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 24 00:52:26.389488 update_engine[1570]: I20260124 00:52:26.388975 1570 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 24 00:52:26.389488 update_engine[1570]: I20260124 00:52:26.389367 1570 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 24 00:52:26.390214 update_engine[1570]: E20260124 00:52:26.390177 1570 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 24 00:52:26.390272 update_engine[1570]: I20260124 00:52:26.390223 1570 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 24 00:52:26.390272 update_engine[1570]: I20260124 00:52:26.390234 1570 omaha_request_action.cc:617] Omaha request response: Jan 24 00:52:26.390358 update_engine[1570]: E20260124 00:52:26.390327 1570 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 24 00:52:26.390385 update_engine[1570]: I20260124 00:52:26.390355 1570 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 24 00:52:26.390385 update_engine[1570]: I20260124 00:52:26.390363 1570 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 24 00:52:26.390385 update_engine[1570]: I20260124 00:52:26.390370 1570 update_attempter.cc:306] Processing Done. Jan 24 00:52:26.390465 update_engine[1570]: E20260124 00:52:26.390386 1570 update_attempter.cc:619] Update failed. Jan 24 00:52:26.390465 update_engine[1570]: I20260124 00:52:26.390393 1570 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 24 00:52:26.390465 update_engine[1570]: I20260124 00:52:26.390400 1570 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 24 00:52:26.390465 update_engine[1570]: I20260124 00:52:26.390408 1570 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 24 00:52:26.390559 update_engine[1570]: I20260124 00:52:26.390472 1570 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 24 00:52:26.390559 update_engine[1570]: I20260124 00:52:26.390492 1570 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 24 00:52:26.390559 update_engine[1570]: I20260124 00:52:26.390499 1570 omaha_request_action.cc:272] Request: Jan 24 00:52:26.390559 update_engine[1570]: Jan 24 00:52:26.390559 update_engine[1570]: Jan 24 00:52:26.390559 update_engine[1570]: Jan 24 00:52:26.390559 update_engine[1570]: Jan 24 00:52:26.390559 update_engine[1570]: Jan 24 00:52:26.390559 update_engine[1570]: Jan 24 00:52:26.390559 update_engine[1570]: I20260124 00:52:26.390506 1570 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 24 00:52:26.390559 update_engine[1570]: I20260124 00:52:26.390528 1570 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 24 00:52:26.390835 update_engine[1570]: I20260124 00:52:26.390789 1570 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
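The update_engine errors above are not an ordinary network outage: the Omaha request is being posted to the literal hostname "disabled", which can never resolve. Reading that as a deliberately disabled update server (for example via the SERVER setting in the update configuration) is an interpretation, not something the log states; the DNS failure itself is easy to reproduce, as in this tiny sketch.

```python
# Sketch of the failure mode in the update_engine entries above: the Omaha
# endpoint is the literal string "disabled", so every transfer fails at DNS
# resolution ("Could not resolve host: disabled") before any HTTP happens.
import socket

OMAHA_HOST = "disabled"  # taken from "Posting an Omaha request to disabled"

try:
    socket.getaddrinfo(OMAHA_HOST, 443)
except socket.gaierror as err:
    print(f"Could not resolve host: {OMAHA_HOST} ({err})")
```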
Jan 24 00:52:26.391178 locksmithd[1613]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 24 00:52:26.391594 update_engine[1570]: E20260124 00:52:26.391556 1570 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 24 00:52:26.391629 update_engine[1570]: I20260124 00:52:26.391600 1570 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 24 00:52:26.391629 update_engine[1570]: I20260124 00:52:26.391610 1570 omaha_request_action.cc:617] Omaha request response: Jan 24 00:52:26.391629 update_engine[1570]: I20260124 00:52:26.391618 1570 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 24 00:52:26.391629 update_engine[1570]: I20260124 00:52:26.391624 1570 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 24 00:52:26.391720 update_engine[1570]: I20260124 00:52:26.391630 1570 update_attempter.cc:306] Processing Done. Jan 24 00:52:26.391720 update_engine[1570]: I20260124 00:52:26.391638 1570 update_attempter.cc:310] Error event sent. Jan 24 00:52:26.391720 update_engine[1570]: I20260124 00:52:26.391647 1570 update_check_scheduler.cc:74] Next update check in 45m43s Jan 24 00:52:26.391942 locksmithd[1613]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 24 00:52:26.703639 containerd[1593]: time="2026-01-24T00:52:26.702818701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:52:27.033362 containerd[1593]: time="2026-01-24T00:52:27.033211559Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:52:27.035111 containerd[1593]: time="2026-01-24T00:52:27.035074618Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:52:27.035182 containerd[1593]: time="2026-01-24T00:52:27.035151408Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:52:27.035449 kubelet[2807]: E0124 00:52:27.035383 2807 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:52:27.036218 kubelet[2807]: E0124 00:52:27.035455 2807 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:52:27.036218 kubelet[2807]: E0124 00:52:27.035718 2807 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-588dc98d7b-7slqw_calico-apiserver(7daaf6bf-447d-4008-8095-834f38ae42be): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 
00:52:27.036218 kubelet[2807]: E0124 00:52:27.035869 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588dc98d7b-7slqw" podUID="7daaf6bf-447d-4008-8095-834f38ae42be" Jan 24 00:52:27.036440 containerd[1593]: time="2026-01-24T00:52:27.036058896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:52:27.179680 containerd[1593]: time="2026-01-24T00:52:27.179501515Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:52:27.181678 containerd[1593]: time="2026-01-24T00:52:27.181448043Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:52:27.181977 containerd[1593]: time="2026-01-24T00:52:27.181629523Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 24 00:52:27.182552 kubelet[2807]: E0124 00:52:27.182419 2807 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:52:27.182552 kubelet[2807]: E0124 00:52:27.182488 2807 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:52:27.183497 containerd[1593]: time="2026-01-24T00:52:27.183263562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:52:27.183699 kubelet[2807]: E0124 00:52:27.183129 2807 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-dbqvx_calico-system(f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:52:27.309961 containerd[1593]: time="2026-01-24T00:52:27.309325135Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:52:27.312711 containerd[1593]: time="2026-01-24T00:52:27.312586982Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:52:27.312711 containerd[1593]: time="2026-01-24T00:52:27.312686622Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 24 00:52:27.312986 kubelet[2807]: E0124 00:52:27.312929 2807 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:52:27.313109 kubelet[2807]: E0124 00:52:27.313088 2807 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:52:27.313556 kubelet[2807]: E0124 00:52:27.313263 2807 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-rczw4_calico-system(ed711851-0b36-4791-909e-763bb4210e62): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:52:27.313556 kubelet[2807]: E0124 00:52:27.313296 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-rczw4" podUID="ed711851-0b36-4791-909e-763bb4210e62" Jan 24 00:52:27.313939 containerd[1593]: time="2026-01-24T00:52:27.313924351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:52:27.442665 containerd[1593]: time="2026-01-24T00:52:27.442593052Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:52:27.447295 containerd[1593]: time="2026-01-24T00:52:27.447266169Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:52:27.447435 containerd[1593]: time="2026-01-24T00:52:27.447402609Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 24 00:52:27.447992 kubelet[2807]: E0124 00:52:27.447595 2807 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:52:27.447992 kubelet[2807]: E0124 00:52:27.447654 2807 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:52:27.447992 kubelet[2807]: E0124 00:52:27.447712 2807 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-dbqvx_calico-system(f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found" logger="UnhandledError" Jan 24 00:52:27.447992 kubelet[2807]: E0124 00:52:27.447746 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dbqvx" podUID="f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c" Jan 24 00:52:30.489089 systemd[1]: Started sshd@12-172.239.50.209:22-68.220.241.50:46236.service - OpenSSH per-connection server daemon (68.220.241.50:46236). Jan 24 00:52:30.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.239.50.209:22-68.220.241.50:46236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:30.491421 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:52:30.491482 kernel: audit: type=1130 audit(1769215950.488:780): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.239.50.209:22-68.220.241.50:46236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:30.669000 audit[5068]: USER_ACCT pid=5068 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:30.670163 sshd[5068]: Accepted publickey for core from 68.220.241.50 port 46236 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:52:30.672907 sshd-session[5068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:52:30.670000 audit[5068]: CRED_ACQ pid=5068 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:30.681787 kernel: audit: type=1101 audit(1769215950.669:781): pid=5068 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:30.681855 kernel: audit: type=1103 audit(1769215950.670:782): pid=5068 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:30.679008 systemd-logind[1569]: New session 11 of user core. Jan 24 00:52:30.686433 kernel: audit: type=1006 audit(1769215950.671:783): pid=5068 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 24 00:52:30.686296 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 24 00:52:30.690939 kernel: audit: type=1300 audit(1769215950.671:783): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff8f79cac0 a2=3 a3=0 items=0 ppid=1 pid=5068 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:30.671000 audit[5068]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff8f79cac0 a2=3 a3=0 items=0 ppid=1 pid=5068 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:30.671000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:52:30.692000 audit[5068]: USER_START pid=5068 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:30.705083 kernel: audit: type=1327 audit(1769215950.671:783): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:52:30.705128 kernel: audit: type=1105 audit(1769215950.692:784): pid=5068 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:30.695000 audit[5072]: CRED_ACQ pid=5072 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:30.713559 kernel: audit: type=1103 audit(1769215950.695:785): pid=5072 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:30.814134 sshd[5072]: Connection closed by 68.220.241.50 port 46236 Jan 24 00:52:30.814682 sshd-session[5068]: pam_unix(sshd:session): session closed for user core Jan 24 00:52:30.815000 audit[5068]: USER_END pid=5068 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:30.815000 audit[5068]: CRED_DISP pid=5068 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:30.828027 kernel: audit: type=1106 audit(1769215950.815:786): pid=5068 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:30.828089 
kernel: audit: type=1104 audit(1769215950.815:787): pid=5068 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:30.828741 systemd[1]: sshd@12-172.239.50.209:22-68.220.241.50:46236.service: Deactivated successfully. Jan 24 00:52:30.832357 systemd[1]: session-11.scope: Deactivated successfully. Jan 24 00:52:30.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.239.50.209:22-68.220.241.50:46236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:30.845305 systemd-logind[1569]: Session 11 logged out. Waiting for processes to exit. Jan 24 00:52:30.847164 systemd[1]: Started sshd@13-172.239.50.209:22-68.220.241.50:46242.service - OpenSSH per-connection server daemon (68.220.241.50:46242). Jan 24 00:52:30.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.239.50.209:22-68.220.241.50:46242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:30.849275 systemd-logind[1569]: Removed session 11. Jan 24 00:52:31.004000 audit[5085]: USER_ACCT pid=5085 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:31.005735 sshd[5085]: Accepted publickey for core from 68.220.241.50 port 46242 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:52:31.006000 audit[5085]: CRED_ACQ pid=5085 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:31.006000 audit[5085]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffb70eff00 a2=3 a3=0 items=0 ppid=1 pid=5085 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:31.006000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:52:31.008068 sshd-session[5085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:52:31.013016 systemd-logind[1569]: New session 12 of user core. Jan 24 00:52:31.021073 systemd[1]: Started session-12.scope - Session 12 of User core. 
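The audit PROCTITLE records in these SSH bursts carry the process title hex-encoded. Decoding the value that appears verbatim above shows which process the accompanying SYSCALL records belong to:

```python
# Decode the hex-encoded proctitle from the audit records above.
PROCTITLE = "737368642D73657373696F6E3A20636F7265205B707269765D"
print(bytes.fromhex(PROCTITLE).decode("ascii"))  # -> sshd-session: core [priv]
```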
Jan 24 00:52:31.023000 audit[5085]: USER_START pid=5085 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:31.026000 audit[5089]: CRED_ACQ pid=5089 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:31.189690 sshd[5089]: Connection closed by 68.220.241.50 port 46242 Jan 24 00:52:31.192875 sshd-session[5085]: pam_unix(sshd:session): session closed for user core Jan 24 00:52:31.195000 audit[5085]: USER_END pid=5085 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:31.195000 audit[5085]: CRED_DISP pid=5085 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:31.199072 systemd[1]: sshd@13-172.239.50.209:22-68.220.241.50:46242.service: Deactivated successfully. Jan 24 00:52:31.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.239.50.209:22-68.220.241.50:46242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:31.202299 systemd[1]: session-12.scope: Deactivated successfully. Jan 24 00:52:31.203969 systemd-logind[1569]: Session 12 logged out. Waiting for processes to exit. Jan 24 00:52:31.206712 systemd-logind[1569]: Removed session 12. Jan 24 00:52:31.224790 systemd[1]: Started sshd@14-172.239.50.209:22-68.220.241.50:46254.service - OpenSSH per-connection server daemon (68.220.241.50:46254). Jan 24 00:52:31.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.239.50.209:22-68.220.241.50:46254 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:52:31.391000 audit[5099]: USER_ACCT pid=5099 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:31.393169 sshd[5099]: Accepted publickey for core from 68.220.241.50 port 46254 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:52:31.392000 audit[5099]: CRED_ACQ pid=5099 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:31.392000 audit[5099]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc6876d980 a2=3 a3=0 items=0 ppid=1 pid=5099 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:31.392000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:52:31.394477 sshd-session[5099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:52:31.401032 systemd-logind[1569]: New session 13 of user core. Jan 24 00:52:31.407679 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 24 00:52:31.411000 audit[5099]: USER_START pid=5099 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:31.413000 audit[5103]: CRED_ACQ pid=5103 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:31.551890 sshd[5103]: Connection closed by 68.220.241.50 port 46254 Jan 24 00:52:31.552394 sshd-session[5099]: pam_unix(sshd:session): session closed for user core Jan 24 00:52:31.554000 audit[5099]: USER_END pid=5099 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:31.555000 audit[5099]: CRED_DISP pid=5099 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:31.558490 systemd[1]: sshd@14-172.239.50.209:22-68.220.241.50:46254.service: Deactivated successfully. Jan 24 00:52:31.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.239.50.209:22-68.220.241.50:46254 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:31.562814 systemd[1]: session-13.scope: Deactivated successfully. Jan 24 00:52:31.567743 systemd-logind[1569]: Session 13 logged out. Waiting for processes to exit. 
Jan 24 00:52:31.569492 systemd-logind[1569]: Removed session 13. Jan 24 00:52:31.698948 containerd[1593]: time="2026-01-24T00:52:31.698148133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:52:31.849771 containerd[1593]: time="2026-01-24T00:52:31.849708327Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:52:31.850972 containerd[1593]: time="2026-01-24T00:52:31.850912957Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:52:31.851026 containerd[1593]: time="2026-01-24T00:52:31.851004877Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 24 00:52:31.851871 kubelet[2807]: E0124 00:52:31.851198 2807 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:52:31.851871 kubelet[2807]: E0124 00:52:31.851867 2807 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:52:31.852697 kubelet[2807]: E0124 00:52:31.851939 2807 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7b94bdbd9d-dvkkh_calico-system(8e3373a2-955e-4988-8f5b-bfdd560d0dba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:52:31.852697 kubelet[2807]: E0124 00:52:31.851971 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b94bdbd9d-dvkkh" podUID="8e3373a2-955e-4988-8f5b-bfdd560d0dba" Jan 24 00:52:36.588120 systemd[1]: Started sshd@15-172.239.50.209:22-68.220.241.50:56458.service - OpenSSH per-connection server daemon (68.220.241.50:56458). Jan 24 00:52:36.598372 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 24 00:52:36.598439 kernel: audit: type=1130 audit(1769215956.589:807): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.239.50.209:22-68.220.241.50:56458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:36.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.239.50.209:22-68.220.241.50:56458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:52:36.700014 kubelet[2807]: E0124 00:52:36.699973 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bd667b79-zp7m5" podUID="6b257eba-5f3e-4c58-93ff-911ee6aa75db" Jan 24 00:52:36.763000 audit[5118]: USER_ACCT pid=5118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:36.772674 kernel: audit: type=1101 audit(1769215956.763:808): pid=5118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:36.772739 sshd[5118]: Accepted publickey for core from 68.220.241.50 port 56458 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:52:36.774460 sshd-session[5118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:52:36.772000 audit[5118]: CRED_ACQ pid=5118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:36.788330 kernel: audit: type=1103 audit(1769215956.772:809): pid=5118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:36.788396 kernel: audit: type=1006 audit(1769215956.772:810): pid=5118 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 24 00:52:36.792394 kernel: audit: type=1300 audit(1769215956.772:810): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff371abaf0 a2=3 a3=0 items=0 ppid=1 pid=5118 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:36.772000 audit[5118]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff371abaf0 a2=3 a3=0 items=0 ppid=1 pid=5118 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:36.772000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:52:36.797019 kernel: audit: type=1327 
audit(1769215956.772:810): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:52:36.800018 systemd-logind[1569]: New session 14 of user core. Jan 24 00:52:36.807424 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 24 00:52:36.812000 audit[5118]: USER_START pid=5118 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:36.824724 kernel: audit: type=1105 audit(1769215956.812:811): pid=5118 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:36.824777 kernel: audit: type=1103 audit(1769215956.823:812): pid=5122 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:36.823000 audit[5122]: CRED_ACQ pid=5122 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:36.949599 sshd[5122]: Connection closed by 68.220.241.50 port 56458 Jan 24 00:52:36.951949 sshd-session[5118]: pam_unix(sshd:session): session closed for user core Jan 24 00:52:36.952000 audit[5118]: USER_END pid=5118 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:36.955941 systemd-logind[1569]: Session 14 logged out. Waiting for processes to exit. Jan 24 00:52:36.957351 systemd[1]: sshd@15-172.239.50.209:22-68.220.241.50:56458.service: Deactivated successfully. Jan 24 00:52:36.961038 systemd[1]: session-14.scope: Deactivated successfully. Jan 24 00:52:36.962825 kernel: audit: type=1106 audit(1769215956.952:813): pid=5118 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:36.965081 systemd-logind[1569]: Removed session 14. 
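Each kernel audit line above also carries its own timestamp, written as seconds since the Unix epoch plus a per-boot serial, for example audit(1769215956.812:811). Converting one such value (copied verbatim from the records above) gives back the wall-clock time used in the journal prefix, assuming UTC, which matches the -00 offset this host boots with:

```python
# Convert an audit record timestamp (epoch seconds.millis : serial) into the
# wall-clock form the journal prefix uses. Value copied from the lines above;
# UTC is assumed from the host's -00 offset.
from datetime import datetime, timezone

stamp = "audit(1769215956.812:811)"
epoch, serial = stamp[len("audit("):-1].split(":")
when = datetime.fromtimestamp(float(epoch), tz=timezone.utc)
print(when.strftime("%b %d %H:%M:%S"), f"(serial {serial})")
# -> Jan 24 00:52:36 (serial 811)
```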
Jan 24 00:52:36.952000 audit[5118]: CRED_DISP pid=5118 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:36.973819 kernel: audit: type=1104 audit(1769215956.952:814): pid=5118 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:36.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.239.50.209:22-68.220.241.50:56458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:37.699083 containerd[1593]: time="2026-01-24T00:52:37.699041643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:52:37.876994 containerd[1593]: time="2026-01-24T00:52:37.876934508Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:52:37.878186 containerd[1593]: time="2026-01-24T00:52:37.878132977Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:52:37.878267 containerd[1593]: time="2026-01-24T00:52:37.878155997Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:52:37.878483 kubelet[2807]: E0124 00:52:37.878423 2807 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:52:37.878483 kubelet[2807]: E0124 00:52:37.878465 2807 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:52:37.878982 kubelet[2807]: E0124 00:52:37.878538 2807 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-588dc98d7b-pw4d6_calico-apiserver(7db270eb-ad67-4a41-9877-6d27ca50c210): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:52:37.878982 kubelet[2807]: E0124 00:52:37.878567 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588dc98d7b-pw4d6" podUID="7db270eb-ad67-4a41-9877-6d27ca50c210" Jan 24 00:52:39.703825 kubelet[2807]: E0124 00:52:39.703047 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dbqvx" podUID="f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c" Jan 24 00:52:41.698450 kubelet[2807]: E0124 00:52:41.698259 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588dc98d7b-7slqw" podUID="7daaf6bf-447d-4008-8095-834f38ae42be" Jan 24 00:52:41.987400 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:52:41.987502 kernel: audit: type=1130 audit(1769215961.981:816): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.239.50.209:22-68.220.241.50:56464 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:41.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.239.50.209:22-68.220.241.50:56464 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:41.982080 systemd[1]: Started sshd@16-172.239.50.209:22-68.220.241.50:56464.service - OpenSSH per-connection server daemon (68.220.241.50:56464). 
Jan 24 00:52:42.139000 audit[5134]: USER_ACCT pid=5134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:42.140731 sshd[5134]: Accepted publickey for core from 68.220.241.50 port 56464 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:52:42.142899 sshd-session[5134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:52:42.140000 audit[5134]: CRED_ACQ pid=5134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:42.149210 kernel: audit: type=1101 audit(1769215962.139:817): pid=5134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:42.149281 kernel: audit: type=1103 audit(1769215962.140:818): pid=5134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:42.155860 kernel: audit: type=1006 audit(1769215962.140:819): pid=5134 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 24 00:52:42.158351 systemd-logind[1569]: New session 15 of user core. Jan 24 00:52:42.140000 audit[5134]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffee7071640 a2=3 a3=0 items=0 ppid=1 pid=5134 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:42.161307 kernel: audit: type=1300 audit(1769215962.140:819): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffee7071640 a2=3 a3=0 items=0 ppid=1 pid=5134 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:42.140000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:52:42.169159 kernel: audit: type=1327 audit(1769215962.140:819): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:52:42.172982 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 24 00:52:42.175000 audit[5134]: USER_START pid=5134 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:42.178000 audit[5138]: CRED_ACQ pid=5138 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:42.186787 kernel: audit: type=1105 audit(1769215962.175:820): pid=5134 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:42.186872 kernel: audit: type=1103 audit(1769215962.178:821): pid=5138 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:42.289921 sshd[5138]: Connection closed by 68.220.241.50 port 56464 Jan 24 00:52:42.290917 sshd-session[5134]: pam_unix(sshd:session): session closed for user core Jan 24 00:52:42.291000 audit[5134]: USER_END pid=5134 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:42.295947 systemd[1]: sshd@16-172.239.50.209:22-68.220.241.50:56464.service: Deactivated successfully. Jan 24 00:52:42.298391 systemd[1]: session-15.scope: Deactivated successfully. Jan 24 00:52:42.292000 audit[5134]: CRED_DISP pid=5134 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:42.302318 systemd-logind[1569]: Session 15 logged out. Waiting for processes to exit. Jan 24 00:52:42.302844 kernel: audit: type=1106 audit(1769215962.291:822): pid=5134 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:42.302895 kernel: audit: type=1104 audit(1769215962.292:823): pid=5134 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:42.303662 systemd-logind[1569]: Removed session 15. Jan 24 00:52:42.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.239.50.209:22-68.220.241.50:56464 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:52:42.701170 kubelet[2807]: E0124 00:52:42.699887 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-rczw4" podUID="ed711851-0b36-4791-909e-763bb4210e62" Jan 24 00:52:44.701379 kubelet[2807]: E0124 00:52:44.701312 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b94bdbd9d-dvkkh" podUID="8e3373a2-955e-4988-8f5b-bfdd560d0dba" Jan 24 00:52:47.327053 systemd[1]: Started sshd@17-172.239.50.209:22-68.220.241.50:39846.service - OpenSSH per-connection server daemon (68.220.241.50:39846). Jan 24 00:52:47.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.239.50.209:22-68.220.241.50:39846 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:47.327833 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:52:47.327891 kernel: audit: type=1130 audit(1769215967.326:825): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.239.50.209:22-68.220.241.50:39846 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:52:47.484000 audit[5149]: USER_ACCT pid=5149 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:47.494194 kernel: audit: type=1101 audit(1769215967.484:826): pid=5149 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:47.494282 sshd[5149]: Accepted publickey for core from 68.220.241.50 port 39846 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:52:47.496165 sshd-session[5149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:52:47.494000 audit[5149]: CRED_ACQ pid=5149 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:47.504634 kernel: audit: type=1103 audit(1769215967.494:827): pid=5149 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:47.504668 kernel: audit: type=1006 audit(1769215967.494:828): pid=5149 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 24 00:52:47.494000 audit[5149]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff5ae4ac70 a2=3 a3=0 items=0 ppid=1 pid=5149 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:47.520994 kernel: audit: type=1300 audit(1769215967.494:828): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff5ae4ac70 a2=3 a3=0 items=0 ppid=1 pid=5149 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:47.521080 kernel: audit: type=1327 audit(1769215967.494:828): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:52:47.494000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:52:47.527028 systemd-logind[1569]: New session 16 of user core. Jan 24 00:52:47.530963 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 24 00:52:47.545980 kernel: audit: type=1105 audit(1769215967.534:829): pid=5149 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:47.534000 audit[5149]: USER_START pid=5149 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:47.545000 audit[5153]: CRED_ACQ pid=5153 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:47.553825 kernel: audit: type=1103 audit(1769215967.545:830): pid=5153 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:47.667495 sshd[5153]: Connection closed by 68.220.241.50 port 39846 Jan 24 00:52:47.668790 sshd-session[5149]: pam_unix(sshd:session): session closed for user core Jan 24 00:52:47.671000 audit[5149]: USER_END pid=5149 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:47.682885 kernel: audit: type=1106 audit(1769215967.671:831): pid=5149 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:47.683397 systemd-logind[1569]: Session 16 logged out. Waiting for processes to exit. Jan 24 00:52:47.684101 systemd[1]: sshd@17-172.239.50.209:22-68.220.241.50:39846.service: Deactivated successfully. Jan 24 00:52:47.687028 systemd[1]: session-16.scope: Deactivated successfully. Jan 24 00:52:47.700810 kernel: audit: type=1104 audit(1769215967.671:832): pid=5149 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:47.671000 audit[5149]: CRED_DISP pid=5149 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:47.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.239.50.209:22-68.220.241.50:39846 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:47.708939 systemd-logind[1569]: Removed session 16. 
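Most userspace audit events in this section appear twice: once as a symbolic record (USER_START, CRED_ACQ, ...) and once echoed by the kernel as "audit: type=NNNN". The numeric-to-symbolic mapping below follows the standard Linux audit numbering; every pair except SERVICE_STOP can be cross-checked against adjacent entries here:

# Audit record types seen in this section, standard Linux audit numbering.
AUDIT_TYPES = {
    1006: "LOGIN",          # auid/ses assignment for the new SSH session
    1101: "USER_ACCT",      # PAM accounting
    1103: "CRED_ACQ",       # PAM setcred
    1104: "CRED_DISP",
    1105: "USER_START",     # PAM session_open
    1106: "USER_END",       # PAM session_close
    1130: "SERVICE_START",  # systemd unit started
    1131: "SERVICE_STOP",
    1300: "SYSCALL",
    1325: "NETFILTER_CFG",
    1327: "PROCTITLE",
}

def audit_type_name(type_id: int) -> str:
    return AUDIT_TYPES.get(type_id, f"unknown({type_id})")

print(audit_type_name(1105))  # -> USER_START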
Jan 24 00:52:47.712020 systemd[1]: Started sshd@18-172.239.50.209:22-68.220.241.50:39856.service - OpenSSH per-connection server daemon (68.220.241.50:39856). Jan 24 00:52:47.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.239.50.209:22-68.220.241.50:39856 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:47.886000 audit[5165]: USER_ACCT pid=5165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:47.888055 sshd[5165]: Accepted publickey for core from 68.220.241.50 port 39856 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:52:47.887000 audit[5165]: CRED_ACQ pid=5165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:47.887000 audit[5165]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffefedaefe0 a2=3 a3=0 items=0 ppid=1 pid=5165 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:47.887000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:52:47.890344 sshd-session[5165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:52:47.896013 systemd-logind[1569]: New session 17 of user core. Jan 24 00:52:47.901991 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 24 00:52:47.908000 audit[5165]: USER_START pid=5165 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:47.910000 audit[5169]: CRED_ACQ pid=5169 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:48.161819 sshd[5169]: Connection closed by 68.220.241.50 port 39856 Jan 24 00:52:48.162642 sshd-session[5165]: pam_unix(sshd:session): session closed for user core Jan 24 00:52:48.164000 audit[5165]: USER_END pid=5165 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:48.164000 audit[5165]: CRED_DISP pid=5165 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:48.170003 systemd[1]: sshd@18-172.239.50.209:22-68.220.241.50:39856.service: Deactivated successfully. 
Jan 24 00:52:48.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.239.50.209:22-68.220.241.50:39856 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:48.170559 systemd-logind[1569]: Session 17 logged out. Waiting for processes to exit. Jan 24 00:52:48.176723 systemd[1]: session-17.scope: Deactivated successfully. Jan 24 00:52:48.180360 systemd-logind[1569]: Removed session 17. Jan 24 00:52:48.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.239.50.209:22-68.220.241.50:39872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:48.200321 systemd[1]: Started sshd@19-172.239.50.209:22-68.220.241.50:39872.service - OpenSSH per-connection server daemon (68.220.241.50:39872). Jan 24 00:52:48.390000 audit[5180]: USER_ACCT pid=5180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:48.391442 sshd[5180]: Accepted publickey for core from 68.220.241.50 port 39872 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:52:48.391000 audit[5180]: CRED_ACQ pid=5180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:48.391000 audit[5180]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcadd88440 a2=3 a3=0 items=0 ppid=1 pid=5180 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:48.391000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:52:48.394056 sshd-session[5180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:52:48.400934 systemd-logind[1569]: New session 18 of user core. Jan 24 00:52:48.404970 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 24 00:52:48.409000 audit[5180]: USER_START pid=5180 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:48.412000 audit[5184]: CRED_ACQ pid=5184 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:48.706399 kubelet[2807]: E0124 00:52:48.706065 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bd667b79-zp7m5" podUID="6b257eba-5f3e-4c58-93ff-911ee6aa75db" Jan 24 00:52:49.043491 sshd[5184]: Connection closed by 68.220.241.50 port 39872 Jan 24 00:52:49.045083 sshd-session[5180]: pam_unix(sshd:session): session closed for user core Jan 24 00:52:49.046000 audit[5180]: USER_END pid=5180 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:49.047000 audit[5180]: CRED_DISP pid=5180 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:49.051570 systemd[1]: sshd@19-172.239.50.209:22-68.220.241.50:39872.service: Deactivated successfully. Jan 24 00:52:49.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.239.50.209:22-68.220.241.50:39872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:49.055236 systemd[1]: session-18.scope: Deactivated successfully. Jan 24 00:52:49.057449 systemd-logind[1569]: Session 18 logged out. Waiting for processes to exit. Jan 24 00:52:49.059749 systemd-logind[1569]: Removed session 18. 
Jan 24 00:52:49.062000 audit[5196]: NETFILTER_CFG table=filter:138 family=2 entries=26 op=nft_register_rule pid=5196 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:52:49.062000 audit[5196]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe5a82d500 a2=0 a3=7ffe5a82d4ec items=0 ppid=2920 pid=5196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:49.062000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:52:49.065000 audit[5196]: NETFILTER_CFG table=nat:139 family=2 entries=20 op=nft_register_rule pid=5196 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:52:49.065000 audit[5196]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe5a82d500 a2=0 a3=0 items=0 ppid=2920 pid=5196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:49.065000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:52:49.076059 systemd[1]: Started sshd@20-172.239.50.209:22-68.220.241.50:39886.service - OpenSSH per-connection server daemon (68.220.241.50:39886). Jan 24 00:52:49.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.239.50.209:22-68.220.241.50:39886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:49.245000 audit[5201]: USER_ACCT pid=5201 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:49.250889 sshd[5201]: Accepted publickey for core from 68.220.241.50 port 39886 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:52:49.250000 audit[5201]: CRED_ACQ pid=5201 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:49.250000 audit[5201]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc721ae910 a2=3 a3=0 items=0 ppid=1 pid=5201 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:49.250000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:52:49.255925 sshd-session[5201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:52:49.268147 systemd-logind[1569]: New session 19 of user core. Jan 24 00:52:49.274138 systemd[1]: Started session-19.scope - Session 19 of User core. 
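On the SYSCALL records in this section, arch=c000003e is AUDIT_ARCH_X86_64, and only two syscall numbers occur: 1 (write, from sshd-session) and 46 (sendmsg, iptables-restore/xtables-nft-multi pushing nf_tables batches over netlink). A minimal lookup sketch covering just those two:

import re

# arch c000003e is AUDIT_ARCH_X86_64; these are the only syscall numbers
# appearing in the SYSCALL records of this section.
X86_64_SYSCALLS = {1: "write", 46: "sendmsg"}

def describe_syscall(record: str) -> str:
    arch = re.search(r"arch=([0-9a-f]+)", record)
    nr = re.search(r"syscall=(\d+)", record)
    if not (arch and nr):
        return "not a SYSCALL record"
    name = X86_64_SYSCALLS.get(int(nr.group(1)), "?") if arch.group(1) == "c000003e" else "?"
    return f"arch=0x{arch.group(1)} syscall={nr.group(1)} ({name})"

print(describe_syscall("audit: SYSCALL arch=c000003e syscall=46 success=yes exit=14176"))
# -> arch=0xc000003e syscall=46 (sendmsg)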
Jan 24 00:52:49.279000 audit[5201]: USER_START pid=5201 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:49.282000 audit[5205]: CRED_ACQ pid=5205 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:49.579831 sshd[5205]: Connection closed by 68.220.241.50 port 39886 Jan 24 00:52:49.580430 sshd-session[5201]: pam_unix(sshd:session): session closed for user core Jan 24 00:52:49.582000 audit[5201]: USER_END pid=5201 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:49.582000 audit[5201]: CRED_DISP pid=5201 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:49.586463 systemd[1]: sshd@20-172.239.50.209:22-68.220.241.50:39886.service: Deactivated successfully. Jan 24 00:52:49.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.239.50.209:22-68.220.241.50:39886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:49.589448 systemd[1]: session-19.scope: Deactivated successfully. Jan 24 00:52:49.591911 systemd-logind[1569]: Session 19 logged out. Waiting for processes to exit. Jan 24 00:52:49.593939 systemd-logind[1569]: Removed session 19. Jan 24 00:52:49.613046 systemd[1]: Started sshd@21-172.239.50.209:22-68.220.241.50:39900.service - OpenSSH per-connection server daemon (68.220.241.50:39900). Jan 24 00:52:49.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.239.50.209:22-68.220.241.50:39900 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:52:49.699098 kubelet[2807]: E0124 00:52:49.698766 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588dc98d7b-pw4d6" podUID="7db270eb-ad67-4a41-9877-6d27ca50c210" Jan 24 00:52:49.779000 audit[5215]: USER_ACCT pid=5215 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:49.781451 sshd[5215]: Accepted publickey for core from 68.220.241.50 port 39900 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:52:49.781000 audit[5215]: CRED_ACQ pid=5215 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:49.781000 audit[5215]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc6e9b25e0 a2=3 a3=0 items=0 ppid=1 pid=5215 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:49.781000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:52:49.785102 sshd-session[5215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:52:49.793821 systemd-logind[1569]: New session 20 of user core. Jan 24 00:52:49.800122 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 24 00:52:49.804000 audit[5215]: USER_START pid=5215 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:49.807000 audit[5219]: CRED_ACQ pid=5219 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:49.943568 sshd[5219]: Connection closed by 68.220.241.50 port 39900 Jan 24 00:52:49.946193 sshd-session[5215]: pam_unix(sshd:session): session closed for user core Jan 24 00:52:49.946000 audit[5215]: USER_END pid=5215 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:49.947000 audit[5215]: CRED_DISP pid=5215 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:49.950567 systemd-logind[1569]: Session 20 logged out. Waiting for processes to exit. Jan 24 00:52:49.951753 systemd[1]: sshd@21-172.239.50.209:22-68.220.241.50:39900.service: Deactivated successfully. Jan 24 00:52:49.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.239.50.209:22-68.220.241.50:39900 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:49.955293 systemd[1]: session-20.scope: Deactivated successfully. Jan 24 00:52:49.960438 systemd-logind[1569]: Removed session 20. 
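Sessions 14 through 20 above are each opened and torn down within well under a second to a few seconds, which is what short, scripted SSH connections from 68.220.241.50 would look like. A small pairing sketch, assuming a journal dump with one entry per line on stdin and the year 2026 taken from the containerd timestamps earlier in the log:

import re
import sys
from datetime import datetime

# Pair systemd-logind "New session N" / "Removed session N" entries and print
# how long each session lasted.
TS = re.compile(r"^(\w{3} \d+ \d{2}:\d{2}:\d{2}\.\d+)")
NEW = re.compile(r"New session (\d+) of user (\S+)\.")
GONE = re.compile(r"Removed session (\d+)\.")

opened = {}
for line in sys.stdin:
    ts_match = TS.match(line)
    if not ts_match:
        continue
    ts = datetime.strptime("2026 " + ts_match.group(1), "%Y %b %d %H:%M:%S.%f")
    if m := NEW.search(line):
        opened[m.group(1)] = (m.group(2), ts)
    elif (m := GONE.search(line)) and m.group(1) in opened:
        user, start = opened.pop(m.group(1))
        print(f"session {m.group(1)} ({user}): {(ts - start).total_seconds():.1f}s")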
Jan 24 00:52:50.102000 audit[5230]: NETFILTER_CFG table=filter:140 family=2 entries=38 op=nft_register_rule pid=5230 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:52:50.102000 audit[5230]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc676e8f50 a2=0 a3=7ffc676e8f3c items=0 ppid=2920 pid=5230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:50.102000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:52:50.109000 audit[5230]: NETFILTER_CFG table=nat:141 family=2 entries=20 op=nft_register_rule pid=5230 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:52:50.109000 audit[5230]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc676e8f50 a2=0 a3=0 items=0 ppid=2920 pid=5230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:50.109000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:52:51.697836 kubelet[2807]: E0124 00:52:51.697514 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:52:53.481860 kernel: kauditd_printk_skb: 57 callbacks suppressed Jan 24 00:52:53.482042 kernel: audit: type=1325 audit(1769215973.478:874): table=filter:142 family=2 entries=26 op=nft_register_rule pid=5232 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:52:53.478000 audit[5232]: NETFILTER_CFG table=filter:142 family=2 entries=26 op=nft_register_rule pid=5232 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:52:53.478000 audit[5232]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff7ab808c0 a2=0 a3=7fff7ab808ac items=0 ppid=2920 pid=5232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:53.478000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:52:53.519467 kernel: audit: type=1300 audit(1769215973.478:874): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff7ab808c0 a2=0 a3=7fff7ab808ac items=0 ppid=2920 pid=5232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:53.519673 kernel: audit: type=1327 audit(1769215973.478:874): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:52:53.489000 audit[5232]: NETFILTER_CFG table=nat:143 family=2 entries=104 op=nft_register_chain pid=5232 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:52:53.523489 kernel: audit: type=1325 audit(1769215973.489:875): table=nat:143 family=2 entries=104 op=nft_register_chain pid=5232 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:52:53.527017 kernel: audit: type=1300 audit(1769215973.489:875): arch=c000003e 
syscall=46 success=yes exit=48684 a0=3 a1=7fff7ab808c0 a2=0 a3=7fff7ab808ac items=0 ppid=2920 pid=5232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:53.489000 audit[5232]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7fff7ab808c0 a2=0 a3=7fff7ab808ac items=0 ppid=2920 pid=5232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:53.489000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:52:53.536502 kernel: audit: type=1327 audit(1769215973.489:875): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:52:54.701879 kubelet[2807]: E0124 00:52:54.701338 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588dc98d7b-7slqw" podUID="7daaf6bf-447d-4008-8095-834f38ae42be" Jan 24 00:52:54.711405 kubelet[2807]: E0124 00:52:54.711338 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dbqvx" podUID="f3c105a9-c51e-4146-8f7b-cd2b2b3fff5c" Jan 24 00:52:54.982284 systemd[1]: Started sshd@22-172.239.50.209:22-68.220.241.50:37396.service - OpenSSH per-connection server daemon (68.220.241.50:37396). Jan 24 00:52:54.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.239.50.209:22-68.220.241.50:37396 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:54.993968 kernel: audit: type=1130 audit(1769215974.981:876): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.239.50.209:22-68.220.241.50:37396 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:52:55.151000 audit[5236]: USER_ACCT pid=5236 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:55.155281 sshd-session[5236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:52:55.156271 sshd[5236]: Accepted publickey for core from 68.220.241.50 port 37396 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:52:55.153000 audit[5236]: CRED_ACQ pid=5236 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:55.162679 kernel: audit: type=1101 audit(1769215975.151:877): pid=5236 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:55.162723 kernel: audit: type=1103 audit(1769215975.153:878): pid=5236 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:55.164620 systemd-logind[1569]: New session 21 of user core. Jan 24 00:52:55.169495 kernel: audit: type=1006 audit(1769215975.153:879): pid=5236 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 24 00:52:55.153000 audit[5236]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffd64af6a0 a2=3 a3=0 items=0 ppid=1 pid=5236 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:52:55.153000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:52:55.175001 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 24 00:52:55.177000 audit[5236]: USER_START pid=5236 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:55.179000 audit[5240]: CRED_ACQ pid=5240 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:55.295862 sshd[5240]: Connection closed by 68.220.241.50 port 37396 Jan 24 00:52:55.294068 sshd-session[5236]: pam_unix(sshd:session): session closed for user core Jan 24 00:52:55.297000 audit[5236]: USER_END pid=5236 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:55.297000 audit[5236]: CRED_DISP pid=5236 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:52:55.301640 systemd[1]: sshd@22-172.239.50.209:22-68.220.241.50:37396.service: Deactivated successfully. Jan 24 00:52:55.302120 systemd-logind[1569]: Session 21 logged out. Waiting for processes to exit. Jan 24 00:52:55.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.239.50.209:22-68.220.241.50:37396 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:52:55.307045 systemd[1]: session-21.scope: Deactivated successfully. Jan 24 00:52:55.311456 systemd-logind[1569]: Removed session 21. 
Jan 24 00:52:55.697618 kubelet[2807]: E0124 00:52:55.697324 2807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 24 00:52:55.699520 kubelet[2807]: E0124 00:52:55.699247 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-rczw4" podUID="ed711851-0b36-4791-909e-763bb4210e62" Jan 24 00:52:59.700052 kubelet[2807]: E0124 00:52:59.699993 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b94bdbd9d-dvkkh" podUID="8e3373a2-955e-4988-8f5b-bfdd560d0dba" Jan 24 00:53:00.320976 systemd[1]: Started sshd@23-172.239.50.209:22-68.220.241.50:37400.service - OpenSSH per-connection server daemon (68.220.241.50:37400). Jan 24 00:53:00.322436 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 24 00:53:00.324852 kernel: audit: type=1130 audit(1769215980.320:885): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.239.50.209:22-68.220.241.50:37400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:53:00.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.239.50.209:22-68.220.241.50:37400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:53:00.478000 audit[5276]: USER_ACCT pid=5276 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:53:00.480810 sshd[5276]: Accepted publickey for core from 68.220.241.50 port 37400 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:53:00.482853 sshd-session[5276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:53:00.486826 kernel: audit: type=1101 audit(1769215980.478:886): pid=5276 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:53:00.502012 kernel: audit: type=1103 audit(1769215980.480:887): pid=5276 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:53:00.480000 audit[5276]: CRED_ACQ pid=5276 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:53:00.501556 systemd-logind[1569]: New session 22 of user core. Jan 24 00:53:00.480000 audit[5276]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe8ddd2e30 a2=3 a3=0 items=0 ppid=1 pid=5276 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:00.512271 kernel: audit: type=1006 audit(1769215980.480:888): pid=5276 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 24 00:53:00.512315 kernel: audit: type=1300 audit(1769215980.480:888): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe8ddd2e30 a2=3 a3=0 items=0 ppid=1 pid=5276 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:53:00.519487 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 24 00:53:00.480000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:53:00.526814 kernel: audit: type=1327 audit(1769215980.480:888): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:53:00.526000 audit[5276]: USER_START pid=5276 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:53:00.529000 audit[5280]: CRED_ACQ pid=5280 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:53:00.541078 kernel: audit: type=1105 audit(1769215980.526:889): pid=5276 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:53:00.541124 kernel: audit: type=1103 audit(1769215980.529:890): pid=5280 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:53:00.702007 kubelet[2807]: E0124 00:53:00.700895 2807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588dc98d7b-pw4d6" podUID="7db270eb-ad67-4a41-9877-6d27ca50c210" Jan 24 00:53:00.704835 sshd[5280]: Connection closed by 68.220.241.50 port 37400 Jan 24 00:53:00.703931 sshd-session[5276]: pam_unix(sshd:session): session closed for user core Jan 24 00:53:00.704000 audit[5276]: USER_END pid=5276 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:53:00.714311 systemd[1]: sshd@23-172.239.50.209:22-68.220.241.50:37400.service: Deactivated successfully. Jan 24 00:53:00.716829 kernel: audit: type=1106 audit(1769215980.704:891): pid=5276 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:53:00.717682 systemd[1]: session-22.scope: Deactivated successfully. 
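The recurring kubelet dns.go "Nameserver limits exceeded" warnings indicate the node resolv.conf lists more nameservers than kubelet will pass through; it caps the list at three and logs the line it actually applied, here 172.232.0.18, 172.232.0.17 and 172.232.0.16. A rough stand-in for that check, with the resolv.conf text invented for illustration only:

# Rough stand-in for the check behind kubelet's "Nameserver limits exceeded"
# warning: keep at most three nameservers and report what was dropped.
MAX_NAMESERVERS = 3

def applied_nameservers(resolv_conf_text: str) -> tuple[list[str], list[str]]:
    servers = [parts[1]
               for line in resolv_conf_text.splitlines()
               if (parts := line.split()) and parts[0] == "nameserver" and len(parts) > 1]
    return servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]

kept, dropped = applied_nameservers(
    "nameserver 172.232.0.18\n"
    "nameserver 172.232.0.17\n"
    "nameserver 172.232.0.16\n"
    "nameserver 192.0.2.1\n"  # illustrative fourth entry (TEST-NET-1 range)
)
print("applied:", " ".join(kept))
print("omitted:", " ".join(dropped))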
Jan 24 00:53:00.705000 audit[5276]: CRED_DISP pid=5276 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:53:00.724179 systemd-logind[1569]: Session 22 logged out. Waiting for processes to exit. Jan 24 00:53:00.726075 systemd-logind[1569]: Removed session 22. Jan 24 00:53:00.730830 kernel: audit: type=1104 audit(1769215980.705:892): pid=5276 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:53:00.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.239.50.209:22-68.220.241.50:37400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'