Jan 22 00:34:09.187479 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 21 22:02:49 -00 2026
Jan 22 00:34:09.187518 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=2c7ce323fe43e7b63a59c25601f0c418cba5a1d902eeaa4bfcebc579e79e52d2
Jan 22 00:34:09.187528 kernel: BIOS-provided physical RAM map:
Jan 22 00:34:09.187535 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Jan 22 00:34:09.187541 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Jan 22 00:34:09.187547 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 22 00:34:09.187558 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Jan 22 00:34:09.187564 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Jan 22 00:34:09.187571 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 22 00:34:09.187577 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 22 00:34:09.187584 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 22 00:34:09.187591 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 22 00:34:09.187597 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Jan 22 00:34:09.187604 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 22 00:34:09.187614 kernel: NX (Execute Disable) protection: active
Jan 22 00:34:09.187622 kernel: APIC: Static calls initialized
Jan 22 00:34:09.187629 kernel: SMBIOS 2.8 present.
Jan 22 00:34:09.187636 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Jan 22 00:34:09.187643 kernel: DMI: Memory slots populated: 1/1
Jan 22 00:34:09.187652 kernel: Hypervisor detected: KVM
Jan 22 00:34:09.187659 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Jan 22 00:34:09.187666 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 22 00:34:09.187673 kernel: kvm-clock: using sched offset of 6120459591 cycles
Jan 22 00:34:09.187681 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 22 00:34:09.187689 kernel: tsc: Detected 1999.999 MHz processor
Jan 22 00:34:09.187696 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 22 00:34:09.187704 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 22 00:34:09.187714 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Jan 22 00:34:09.187721 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 22 00:34:09.187729 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 22 00:34:09.187736 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Jan 22 00:34:09.187743 kernel: Using GB pages for direct mapping
Jan 22 00:34:09.187751 kernel: ACPI: Early table checksum verification disabled
Jan 22 00:34:09.187758 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Jan 22 00:34:09.187766 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 22 00:34:09.187776 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 22 00:34:09.187783 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 22 00:34:09.187791 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jan 22 00:34:09.187798 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 22 00:34:09.187806 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 22 00:34:09.187817 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 22 00:34:09.187827 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 22 00:34:09.187835 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Jan 22 00:34:09.187843 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Jan 22 00:34:09.187851 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jan 22 00:34:09.187859 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Jan 22 00:34:09.187869 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Jan 22 00:34:09.187876 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Jan 22 00:34:09.187884 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Jan 22 00:34:09.187891 kernel: No NUMA configuration found
Jan 22 00:34:09.187899 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Jan 22 00:34:09.187906 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff]
Jan 22 00:34:09.187914 kernel: Zone ranges:
Jan 22 00:34:09.187922 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 22 00:34:09.187932 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 22 00:34:09.187939 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Jan 22 00:34:09.188098 kernel: Device empty
Jan 22 00:34:09.188106 kernel: Movable zone start for each node
Jan 22 00:34:09.188113 kernel: Early memory node ranges
Jan 22 00:34:09.188120 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 22 00:34:09.188128 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Jan 22 00:34:09.188139 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Jan 22 00:34:09.188146 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Jan 22 00:34:09.188154 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 22 00:34:09.188162 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 22 00:34:09.188170 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jan 22 00:34:09.188177 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 22 00:34:09.188185 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 22 00:34:09.188193 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 22 00:34:09.188266 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 22 00:34:09.188276 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 22 00:34:09.188283 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 22 00:34:09.188291 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 22 00:34:09.188347 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 22 00:34:09.188356 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 22 00:34:09.188364 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 22 00:34:09.188375 kernel: TSC deadline timer available
Jan 22 00:34:09.188383 kernel: CPU topo: Max. logical packages: 1
Jan 22 00:34:09.188390 kernel: CPU topo: Max. logical dies: 1
Jan 22 00:34:09.188398 kernel: CPU topo: Max. dies per package: 1
Jan 22 00:34:09.188405 kernel: CPU topo: Max. threads per core: 1
Jan 22 00:34:09.188413 kernel: CPU topo: Num. cores per package: 2
Jan 22 00:34:09.188421 kernel: CPU topo: Num. threads per package: 2
Jan 22 00:34:09.188431 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jan 22 00:34:09.188439 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 22 00:34:09.188446 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 22 00:34:09.188454 kernel: kvm-guest: setup PV sched yield
Jan 22 00:34:09.188461 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 22 00:34:09.188469 kernel: Booting paravirtualized kernel on KVM
Jan 22 00:34:09.188477 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 22 00:34:09.188485 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 22 00:34:09.188495 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jan 22 00:34:09.188503 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jan 22 00:34:09.188525 kernel: pcpu-alloc: [0] 0 1
Jan 22 00:34:09.188533 kernel: kvm-guest: PV spinlocks enabled
Jan 22 00:34:09.188541 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 22 00:34:09.188550 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=2c7ce323fe43e7b63a59c25601f0c418cba5a1d902eeaa4bfcebc579e79e52d2
Jan 22 00:34:09.188561 kernel: random: crng init done
Jan 22 00:34:09.188568 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 22 00:34:09.188576 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 22 00:34:09.188584 kernel: Fallback order for Node 0: 0
Jan 22 00:34:09.188592 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Jan 22 00:34:09.188599 kernel: Policy zone: Normal
Jan 22 00:34:09.188607 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 22 00:34:09.188615 kernel: software IO TLB: area num 2.
Jan 22 00:34:09.188625 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 22 00:34:09.188633 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 22 00:34:09.188641 kernel: ftrace: allocated 157 pages with 5 groups
Jan 22 00:34:09.188650 kernel: Dynamic Preempt: voluntary
Jan 22 00:34:09.188663 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 22 00:34:09.188676 kernel: rcu: RCU event tracing is enabled.
Jan 22 00:34:09.188688 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 22 00:34:09.188704 kernel: Trampoline variant of Tasks RCU enabled.
Jan 22 00:34:09.188717 kernel: Rude variant of Tasks RCU enabled.
Jan 22 00:34:09.188726 kernel: Tracing variant of Tasks RCU enabled.
Jan 22 00:34:09.188734 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 22 00:34:09.188741 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 22 00:34:09.188752 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 22 00:34:09.188768 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 22 00:34:09.188776 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 22 00:34:09.188784 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 22 00:34:09.188792 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 22 00:34:09.188802 kernel: Console: colour VGA+ 80x25
Jan 22 00:34:09.188810 kernel: printk: legacy console [tty0] enabled
Jan 22 00:34:09.188818 kernel: printk: legacy console [ttyS0] enabled
Jan 22 00:34:09.188826 kernel: ACPI: Core revision 20240827
Jan 22 00:34:09.188837 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 22 00:34:09.188845 kernel: APIC: Switch to symmetric I/O mode setup
Jan 22 00:34:09.188853 kernel: x2apic enabled
Jan 22 00:34:09.188861 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 22 00:34:09.188869 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 22 00:34:09.188877 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 22 00:34:09.188885 kernel: kvm-guest: setup PV IPIs
Jan 22 00:34:09.188895 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 22 00:34:09.188903 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
Jan 22 00:34:09.188911 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999999)
Jan 22 00:34:09.188919 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 22 00:34:09.188927 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 22 00:34:09.188935 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 22 00:34:09.188943 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 22 00:34:09.188953 kernel: Spectre V2 : Mitigation: Retpolines
Jan 22 00:34:09.188961 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 22 00:34:09.188969 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 22 00:34:09.188977 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 22 00:34:09.188985 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 22 00:34:09.188993 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 22 00:34:09.189010 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 22 00:34:09.189023 kernel: active return thunk: srso_alias_return_thunk
Jan 22 00:34:09.189036 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 22 00:34:09.189044 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 22 00:34:09.189052 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 22 00:34:09.189060 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 22 00:34:09.189068 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 22 00:34:09.189079 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 22 00:34:09.189087 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 22 00:34:09.189095 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 22 00:34:09.189103 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Jan 22 00:34:09.189111 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Jan 22 00:34:09.189119 kernel: Freeing SMP alternatives memory: 32K
Jan 22 00:34:09.189126 kernel: pid_max: default: 32768 minimum: 301
Jan 22 00:34:09.189137 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 22 00:34:09.189145 kernel: landlock: Up and running.
Jan 22 00:34:09.189153 kernel: SELinux: Initializing.
Jan 22 00:34:09.189161 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 22 00:34:09.189169 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 22 00:34:09.189177 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 22 00:34:09.189185 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 22 00:34:09.189195 kernel: ... version: 0
Jan 22 00:34:09.189202 kernel: ... bit width: 48
Jan 22 00:34:09.189210 kernel: ... generic registers: 6
Jan 22 00:34:09.189218 kernel: ... value mask: 0000ffffffffffff
Jan 22 00:34:09.189226 kernel: ... max period: 00007fffffffffff
Jan 22 00:34:09.189233 kernel: ... fixed-purpose events: 0
Jan 22 00:34:09.189241 kernel: ... event mask: 000000000000003f
Jan 22 00:34:09.189249 kernel: signal: max sigframe size: 3376
Jan 22 00:34:09.189259 kernel: rcu: Hierarchical SRCU implementation.
Jan 22 00:34:09.189267 kernel: rcu: Max phase no-delay instances is 400.
Jan 22 00:34:09.189275 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 22 00:34:09.189283 kernel: smp: Bringing up secondary CPUs ...
Jan 22 00:34:09.189291 kernel: smpboot: x86: Booting SMP configuration:
Jan 22 00:34:09.189298 kernel: .... node #0, CPUs: #1
Jan 22 00:34:09.189306 kernel: smp: Brought up 1 node, 2 CPUs
Jan 22 00:34:09.189317 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
Jan 22 00:34:09.189325 kernel: Memory: 3979480K/4193772K available (14336K kernel code, 2445K rwdata, 29896K rodata, 15436K init, 2604K bss, 208864K reserved, 0K cma-reserved)
Jan 22 00:34:09.189333 kernel: devtmpfs: initialized
Jan 22 00:34:09.189340 kernel: x86/mm: Memory block size: 128MB
Jan 22 00:34:09.189348 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 22 00:34:09.189356 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 22 00:34:09.189364 kernel: pinctrl core: initialized pinctrl subsystem
Jan 22 00:34:09.189374 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 22 00:34:09.189382 kernel: audit: initializing netlink subsys (disabled)
Jan 22 00:34:09.189390 kernel: audit: type=2000 audit(1769042045.643:1): state=initialized audit_enabled=0 res=1
Jan 22 00:34:09.189398 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 22 00:34:09.189406 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 22 00:34:09.189413 kernel: cpuidle: using governor menu
Jan 22 00:34:09.189426 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 22 00:34:09.189442 kernel: dca service started, version 1.12.1
Jan 22 00:34:09.189455 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 22 00:34:09.189469 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 22 00:34:09.189478 kernel: PCI: Using configuration type 1 for base access
Jan 22 00:34:09.189486 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 22 00:34:09.189494 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 22 00:34:09.189502 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 22 00:34:09.189729 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 22 00:34:09.189737 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 22 00:34:09.189745 kernel: ACPI: Added _OSI(Module Device)
Jan 22 00:34:09.189753 kernel: ACPI: Added _OSI(Processor Device)
Jan 22 00:34:09.189760 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 22 00:34:09.189768 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 22 00:34:09.189776 kernel: ACPI: Interpreter enabled
Jan 22 00:34:09.189786 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 22 00:34:09.189793 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 22 00:34:09.189801 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 22 00:34:09.189809 kernel: PCI: Using E820 reservations for host bridge windows
Jan 22 00:34:09.189817 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 22 00:34:09.189824 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 22 00:34:09.190070 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 22 00:34:09.190265 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 22 00:34:09.190449 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 22 00:34:09.190460 kernel: PCI host bridge to bus 0000:00
Jan 22 00:34:09.190758 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 22 00:34:09.190927 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 22 00:34:09.191095 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 22 00:34:09.191256 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 22 00:34:09.191415 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 22 00:34:09.191597 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Jan 22 00:34:09.191761 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 22 00:34:09.191954 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 22 00:34:09.192145 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 22 00:34:09.192322 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jan 22 00:34:09.192495 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jan 22 00:34:09.192703 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jan 22 00:34:09.193015 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 22 00:34:09.193200 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Jan 22 00:34:09.193380 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Jan 22 00:34:09.193660 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jan 22 00:34:09.193845 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 22 00:34:09.194215 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 22 00:34:09.194391 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Jan 22 00:34:09.194593 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jan 22 00:34:09.194771 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 22 00:34:09.194944 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jan 22 00:34:09.195179 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 22 00:34:09.195602 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 22 00:34:09.195790 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 22 00:34:09.195972 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Jan 22 00:34:09.196417 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Jan 22 00:34:09.196627 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 22 00:34:09.196806 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 22 00:34:09.196817 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 22 00:34:09.196829 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 22 00:34:09.196838 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 22 00:34:09.196846 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 22 00:34:09.196854 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 22 00:34:09.196862 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 22 00:34:09.196870 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 22 00:34:09.196878 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 22 00:34:09.196888 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 22 00:34:09.196897 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 22 00:34:09.196905 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 22 00:34:09.196913 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 22 00:34:09.196921 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 22 00:34:09.196929 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 22 00:34:09.196937 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 22 00:34:09.196947 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 22 00:34:09.196956 kernel: iommu: Default domain type: Translated
Jan 22 00:34:09.196964 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 22 00:34:09.196972 kernel: PCI: Using ACPI for IRQ routing
Jan 22 00:34:09.196980 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 22 00:34:09.196988 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Jan 22 00:34:09.196996 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Jan 22 00:34:09.197172 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 22 00:34:09.197345 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 22 00:34:09.197534 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 22 00:34:09.197546 kernel: vgaarb: loaded
Jan 22 00:34:09.197555 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 22 00:34:09.197563 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 22 00:34:09.197571 kernel: clocksource: Switched to clocksource kvm-clock
Jan 22 00:34:09.197583 kernel: VFS: Disk quotas dquot_6.6.0
Jan 22 00:34:09.197591 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 22 00:34:09.197599 kernel: pnp: PnP ACPI init
Jan 22 00:34:09.197966 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 22 00:34:09.197981 kernel: pnp: PnP ACPI: found 5 devices
Jan 22 00:34:09.197990 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 22 00:34:09.197998 kernel: NET: Registered PF_INET protocol family
Jan 22 00:34:09.198010 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 22 00:34:09.198018 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 22 00:34:09.198027 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 22 00:34:09.198035 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 22 00:34:09.198043 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 22 00:34:09.198051 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 22 00:34:09.198059 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 22 00:34:09.198069 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 22 00:34:09.198077 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 22 00:34:09.198086 kernel: NET: Registered PF_XDP protocol family
Jan 22 00:34:09.198282 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 22 00:34:09.198447 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 22 00:34:09.198627 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 22 00:34:09.198821 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 22 00:34:09.198986 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 22 00:34:09.199146 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Jan 22 00:34:09.199157 kernel: PCI: CLS 0 bytes, default 64
Jan 22 00:34:09.199166 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 22 00:34:09.199174 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Jan 22 00:34:09.199182 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
Jan 22 00:34:09.199194 kernel: Initialise system trusted keyrings
Jan 22 00:34:09.199202 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 22 00:34:09.199210 kernel: Key type asymmetric registered
Jan 22 00:34:09.199218 kernel: Asymmetric key parser 'x509' registered
Jan 22 00:34:09.199226 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 22 00:34:09.199234 kernel: io scheduler mq-deadline registered
Jan 22 00:34:09.199242 kernel: io scheduler kyber registered
Jan 22 00:34:09.199253 kernel: io scheduler bfq registered
Jan 22 00:34:09.199261 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 22 00:34:09.199269 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 22 00:34:09.199278 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 22 00:34:09.199286 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 22 00:34:09.199294 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 22 00:34:09.199303 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 22 00:34:09.199311 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 22 00:34:09.199321 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 22 00:34:09.199329 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 22 00:34:09.199505 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 22 00:34:09.199741 kernel: rtc_cmos 00:03: registered as rtc0
Jan 22 00:34:09.199965 kernel: rtc_cmos 00:03: setting system clock to 2026-01-22T00:34:07 UTC (1769042047)
Jan 22 00:34:09.200139 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 22 00:34:09.200155 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 22 00:34:09.200163 kernel: NET: Registered PF_INET6 protocol family
Jan 22 00:34:09.200171 kernel: Segment Routing with IPv6
Jan 22 00:34:09.200179 kernel: In-situ OAM (IOAM) with IPv6
Jan 22 00:34:09.200187 kernel: NET: Registered PF_PACKET protocol family
Jan 22 00:34:09.200195 kernel: Key type dns_resolver registered
Jan 22 00:34:09.200204 kernel: IPI shorthand broadcast: enabled
Jan 22 00:34:09.200214 kernel: sched_clock: Marking stable (1824007534, 340990546)->(2262632930, -97634850)
Jan 22 00:34:09.200222 kernel: registered taskstats version 1
Jan 22 00:34:09.200230 kernel: Loading compiled-in X.509 certificates
Jan 22 00:34:09.200238 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 3c3e07c08e874e2a4bf964a0051bfd3618f8b847'
Jan 22 00:34:09.200246 kernel: Demotion targets for Node 0: null
Jan 22 00:34:09.200254 kernel: Key type .fscrypt registered
Jan 22 00:34:09.200261 kernel: Key type fscrypt-provisioning registered
Jan 22 00:34:09.200271 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 22 00:34:09.200279 kernel: ima: Allocated hash algorithm: sha1
Jan 22 00:34:09.200287 kernel: ima: No architecture policies found
Jan 22 00:34:09.200295 kernel: clk: Disabling unused clocks
Jan 22 00:34:09.200303 kernel: Freeing unused kernel image (initmem) memory: 15436K
Jan 22 00:34:09.200311 kernel: Write protecting the kernel read-only data: 45056k
Jan 22 00:34:09.200319 kernel: Freeing unused kernel image (rodata/data gap) memory: 824K
Jan 22 00:34:09.200329 kernel: Run /init as init process
Jan 22 00:34:09.200337 kernel: with arguments:
Jan 22 00:34:09.200345 kernel: /init
Jan 22 00:34:09.200353 kernel: with environment:
Jan 22 00:34:09.200361 kernel: HOME=/
Jan 22 00:34:09.200384 kernel: TERM=linux
Jan 22 00:34:09.200394 kernel: SCSI subsystem initialized
Jan 22 00:34:09.200404 kernel: libata version 3.00 loaded.
Jan 22 00:34:09.200599 kernel: ahci 0000:00:1f.2: version 3.0
Jan 22 00:34:09.200612 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 22 00:34:09.200786 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 22 00:34:09.200961 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 22 00:34:09.201136 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 22 00:34:09.201339 kernel: scsi host0: ahci
Jan 22 00:34:09.201546 kernel: scsi host1: ahci
Jan 22 00:34:09.201737 kernel: scsi host2: ahci
Jan 22 00:34:09.201925 kernel: scsi host3: ahci
Jan 22 00:34:09.202112 kernel: scsi host4: ahci
Jan 22 00:34:09.202301 kernel: scsi host5: ahci
Jan 22 00:34:09.202318 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 24 lpm-pol 1
Jan 22 00:34:09.202326 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 24 lpm-pol 1
Jan 22 00:34:09.202335 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 24 lpm-pol 1
Jan 22 00:34:09.202343 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 24 lpm-pol 1
Jan 22 00:34:09.202352 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 24 lpm-pol 1
Jan 22 00:34:09.202360 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 24 lpm-pol 1
Jan 22 00:34:09.202370 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 22 00:34:09.202384 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 22 00:34:09.202397 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 22 00:34:09.202411 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 22 00:34:09.202424 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jan 22 00:34:09.202437 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 22 00:34:09.203028 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
Jan 22 00:34:09.203234 kernel: scsi host6: Virtio SCSI HBA
Jan 22 00:34:09.203568 kernel: scsi 6:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jan 22 00:34:09.203866 kernel: sd 6:0:0:0: Power-on or device reset occurred
Jan 22 00:34:09.204113 kernel: sd 6:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Jan 22 00:34:09.204310 kernel: sd 6:0:0:0: [sda] Write Protect is off
Jan 22 00:34:09.204534 kernel: sd 6:0:0:0: [sda] Mode Sense: 63 00 00 08
Jan 22 00:34:09.204765 kernel: sd 6:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 22 00:34:09.204777 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 22 00:34:09.204786 kernel: GPT:25804799 != 167739391
Jan 22 00:34:09.204795 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 22 00:34:09.204803 kernel: GPT:25804799 != 167739391
Jan 22 00:34:09.204811 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 22 00:34:09.204822 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 22 00:34:09.205015 kernel: sd 6:0:0:0: [sda] Attached SCSI disk
Jan 22 00:34:09.205027 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 22 00:34:09.205035 kernel: device-mapper: uevent: version 1.0.3
Jan 22 00:34:09.205043 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 22 00:34:09.205051 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Jan 22 00:34:09.205060 kernel: raid6: avx2x4 gen() 23077 MB/s
Jan 22 00:34:09.205071 kernel: raid6: avx2x2 gen() 22293 MB/s
Jan 22 00:34:09.205079 kernel: raid6: avx2x1 gen() 13144 MB/s
Jan 22 00:34:09.205088 kernel: raid6: using algorithm avx2x4 gen() 23077 MB/s
Jan 22 00:34:09.205096 kernel: raid6: .... xor() 5186 MB/s, rmw enabled
Jan 22 00:34:09.205106 kernel: raid6: using avx2x2 recovery algorithm
Jan 22 00:34:09.205115 kernel: xor: automatically using best checksumming function avx
Jan 22 00:34:09.205123 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 22 00:34:09.205131 kernel: BTRFS: device fsid 79986906-7858-40a3-90f5-bda7e594a44c devid 1 transid 34 /dev/mapper/usr (254:0) scanned by mount (167)
Jan 22 00:34:09.205140 kernel: BTRFS info (device dm-0): first mount of filesystem 79986906-7858-40a3-90f5-bda7e594a44c
Jan 22 00:34:09.205150 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 22 00:34:09.205158 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 22 00:34:09.205168 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 22 00:34:09.205176 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 22 00:34:09.205184 kernel: loop: module loaded
Jan 22 00:34:09.205193 kernel: loop0: detected capacity change from 0 to 100160
Jan 22 00:34:09.205201 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 22 00:34:09.205210 systemd[1]: Successfully made /usr/ read-only.
Jan 22 00:34:09.205221 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 22 00:34:09.205233 systemd[1]: Detected virtualization kvm.
Jan 22 00:34:09.205242 systemd[1]: Detected architecture x86-64.
Jan 22 00:34:09.205250 systemd[1]: Running in initrd.
Jan 22 00:34:09.205258 systemd[1]: No hostname configured, using default hostname.
Jan 22 00:34:09.205273 systemd[1]: Hostname set to <localhost>.
Jan 22 00:34:09.205288 systemd[1]: Initializing machine ID from random generator.
Jan 22 00:34:09.205302 systemd[1]: Queued start job for default target initrd.target.
Jan 22 00:34:09.205311 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 22 00:34:09.205320 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 22 00:34:09.205329 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 22 00:34:09.205338 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 22 00:34:09.205347 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 22 00:34:09.205358 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 22 00:34:09.205367 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 22 00:34:09.205376 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 22 00:34:09.205384 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 22 00:34:09.205393 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 22 00:34:09.205404 systemd[1]: Reached target paths.target - Path Units.
Jan 22 00:34:09.205422 systemd[1]: Reached target slices.target - Slice Units.
Jan 22 00:34:09.205435 systemd[1]: Reached target swap.target - Swaps.
Jan 22 00:34:09.205444 systemd[1]: Reached target timers.target - Timer Units.
Jan 22 00:34:09.205452 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 22 00:34:09.205461 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 22 00:34:09.205470 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 22 00:34:09.205478 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 22 00:34:09.205489 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 22 00:34:09.205498 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 22 00:34:09.205522 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 22 00:34:09.205546 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 22 00:34:09.205554 systemd[1]: Reached target sockets.target - Socket Units.
Jan 22 00:34:09.205563 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 22 00:34:09.205572 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 22 00:34:09.205584 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 22 00:34:09.205593 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 22 00:34:09.205602 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 22 00:34:09.205612 systemd[1]: Starting systemd-fsck-usr.service...
Jan 22 00:34:09.205621 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 22 00:34:09.205629 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 22 00:34:09.205641 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 22 00:34:09.205650 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 22 00:34:09.205659 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 22 00:34:09.205667 systemd[1]: Finished systemd-fsck-usr.service.
Jan 22 00:34:09.205679 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 22 00:34:09.205713 systemd-journald[303]: Collecting audit messages is enabled.
Jan 22 00:34:09.205733 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 22 00:34:09.205747 systemd-journald[303]: Journal started
Jan 22 00:34:09.205765 systemd-journald[303]: Runtime Journal (/run/log/journal/0b33c38ec5334e9ba706f46358fa03c7) is 8M, max 78.1M, 70.1M free.
Jan 22 00:34:09.211972 kernel: Bridge firewalling registered
Jan 22 00:34:09.212004 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 22 00:34:09.209692 systemd-modules-load[305]: Inserted module 'br_netfilter'
Jan 22 00:34:09.214504 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 22 00:34:09.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:09.226630 kernel: audit: type=1130 audit(1769042049.213:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:09.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:09.319742 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 22 00:34:09.337357 kernel: audit: type=1130 audit(1769042049.311:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:09.337382 kernel: audit: type=1130 audit(1769042049.320:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:09.337395 kernel: audit: type=1130 audit(1769042049.329:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:09.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:09.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:09.328600 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 22 00:34:09.335646 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 22 00:34:09.344263 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 22 00:34:09.347345 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 22 00:34:09.351717 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 22 00:34:09.372185 systemd-tmpfiles[325]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 22 00:34:09.376997 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 22 00:34:09.386495 kernel: audit: type=1130 audit(1769042049.377:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:09.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:09.386679 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 22 00:34:09.390675 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 22 00:34:09.400651 kernel: audit: type=1130 audit(1769042049.391:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:09.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:09.399695 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 22 00:34:09.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:09.409550 kernel: audit: type=1130 audit(1769042049.401:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:09.409777 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 22 00:34:09.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:09.416643 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 22 00:34:09.425452 kernel: audit: type=1130 audit(1769042049.410:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:09.425472 kernel: audit: type=1334 audit(1769042049.413:10): prog-id=6 op=LOAD
Jan 22 00:34:09.413000 audit: BPF prog-id=6 op=LOAD
Jan 22 00:34:09.436472 dracut-cmdline[338]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=2c7ce323fe43e7b63a59c25601f0c418cba5a1d902eeaa4bfcebc579e79e52d2
Jan 22 00:34:09.481955 systemd-resolved[342]: Positive Trust Anchors:
Jan 22 00:34:09.483142 systemd-resolved[342]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 22 00:34:09.483152 systemd-resolved[342]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 22 00:34:09.483202 systemd-resolved[342]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 22 00:34:09.506221 systemd-resolved[342]: Defaulting to hostname 'linux'.
Jan 22 00:34:09.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:09.510264 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 22 00:34:09.511081 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 22 00:34:09.549533 kernel: Loading iSCSI transport class v2.0-870.
Jan 22 00:34:09.564526 kernel: iscsi: registered transport (tcp)
Jan 22 00:34:09.590093 kernel: iscsi: registered transport (qla4xxx)
Jan 22 00:34:09.590154 kernel: QLogic iSCSI HBA Driver
Jan 22 00:34:09.618699 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 22 00:34:09.656356 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 22 00:34:09.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:09.660034 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 22 00:34:09.713985 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 22 00:34:09.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:09.716669 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 22 00:34:09.720629 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 22 00:34:09.759040 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 22 00:34:09.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:09.761000 audit: BPF prog-id=7 op=LOAD
Jan 22 00:34:09.761000 audit: BPF prog-id=8 op=LOAD
Jan 22 00:34:09.763221 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 22 00:34:09.793440 systemd-udevd[584]: Using default interface naming scheme 'v257'.
Jan 22 00:34:09.809356 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 22 00:34:09.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:09.813920 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 22 00:34:09.843071 dracut-pre-trigger[658]: rd.md=0: removing MD RAID activation
Jan 22 00:34:09.845841 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 22 00:34:09.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:09.848000 audit: BPF prog-id=9 op=LOAD
Jan 22 00:34:09.850483 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 22 00:34:09.880997 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 22 00:34:09.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:09.884704 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 22 00:34:09.902683 systemd-networkd[695]: lo: Link UP
Jan 22 00:34:09.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:09.903780 systemd-networkd[695]: lo: Gained carrier
Jan 22 00:34:09.904579 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 22 00:34:09.905419 systemd[1]: Reached target network.target - Network.
Jan 22 00:34:09.985096 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 22 00:34:09.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:09.987308 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 22 00:34:10.111894 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jan 22 00:34:10.142118 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jan 22 00:34:10.309813 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jan 22 00:34:10.328118 kernel: cryptd: max_cpu_qlen set to 1000
Jan 22 00:34:10.336542 kernel: AES CTR mode by8 optimization enabled
Jan 22 00:34:10.360624 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jan 22 00:34:10.381676 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 22 00:34:10.408537 disk-uuid[785]: Primary Header is updated.
Jan 22 00:34:10.408537 disk-uuid[785]: Secondary Entries is updated.
Jan 22 00:34:10.408537 disk-uuid[785]: Secondary Header is updated.
Jan 22 00:34:10.417191 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 22 00:34:10.418815 systemd-networkd[695]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 22 00:34:10.418820 systemd-networkd[695]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 22 00:34:10.419796 systemd-networkd[695]: eth0: Link UP
Jan 22 00:34:10.420038 systemd-networkd[695]: eth0: Gained carrier
Jan 22 00:34:10.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:10.420049 systemd-networkd[695]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 22 00:34:10.440838 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 22 00:34:10.441491 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 22 00:34:10.445491 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 22 00:34:10.450726 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 22 00:34:10.607494 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 22 00:34:10.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:10.613261 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 22 00:34:10.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:10.615382 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 22 00:34:10.616384 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 22 00:34:10.618012 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 22 00:34:10.621478 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 22 00:34:10.642181 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 22 00:34:10.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:11.240611 systemd-networkd[695]: eth0: DHCPv4 address 172.232.4.171/24, gateway 172.232.4.1 acquired from 23.213.15.242
Jan 22 00:34:11.475835 systemd-networkd[695]: eth0: Gained IPv6LL
Jan 22 00:34:11.480200 disk-uuid[790]: Warning: The kernel is still using the old partition table.
Jan 22 00:34:11.480200 disk-uuid[790]: The new table will be used at the next reboot or after you
Jan 22 00:34:11.480200 disk-uuid[790]: run partprobe(8) or kpartx(8)
Jan 22 00:34:11.480200 disk-uuid[790]: The operation has completed successfully.
Jan 22 00:34:11.487725 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 22 00:34:11.505987 kernel: kauditd_printk_skb: 16 callbacks suppressed
Jan 22 00:34:11.506018 kernel: audit: type=1130 audit(1769042051.488:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:11.506032 kernel: audit: type=1131 audit(1769042051.488:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:11.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:11.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:11.487861 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 22 00:34:11.489732 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 22 00:34:11.530538 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (855) Jan 22 00:34:11.537577 kernel: BTRFS info (device sda6): first mount of filesystem 04d4f92e-e2f4-4570-a15f-a84e10359254 Jan 22 00:34:11.537603 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 22 00:34:11.544812 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 22 00:34:11.544839 kernel: BTRFS info (device sda6): turning on async discard Jan 22 00:34:11.544856 kernel: BTRFS info (device sda6): enabling free space tree Jan 22 00:34:11.555532 kernel: BTRFS info (device sda6): last unmount of filesystem 04d4f92e-e2f4-4570-a15f-a84e10359254 Jan 22 00:34:11.556311 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 22 00:34:11.565941 kernel: audit: type=1130 audit(1769042051.556:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:11.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:11.558128 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 22 00:34:11.685926 ignition[874]: Ignition 2.22.0 Jan 22 00:34:11.685943 ignition[874]: Stage: fetch-offline Jan 22 00:34:11.689147 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 22 00:34:11.699316 kernel: audit: type=1130 audit(1769042051.690:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:11.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:11.685982 ignition[874]: no configs at "/usr/lib/ignition/base.d" Jan 22 00:34:11.692610 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 22 00:34:11.685994 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 22 00:34:11.686078 ignition[874]: parsed url from cmdline: "" Jan 22 00:34:11.686082 ignition[874]: no config URL provided Jan 22 00:34:11.686088 ignition[874]: reading system config file "/usr/lib/ignition/user.ign" Jan 22 00:34:11.686099 ignition[874]: no config at "/usr/lib/ignition/user.ign" Jan 22 00:34:11.686104 ignition[874]: failed to fetch config: resource requires networking Jan 22 00:34:11.686384 ignition[874]: Ignition finished successfully Jan 22 00:34:11.738963 ignition[880]: Ignition 2.22.0 Jan 22 00:34:11.739503 ignition[880]: Stage: fetch Jan 22 00:34:11.739681 ignition[880]: no configs at "/usr/lib/ignition/base.d" Jan 22 00:34:11.739692 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 22 00:34:11.739787 ignition[880]: parsed url from cmdline: "" Jan 22 00:34:11.739791 ignition[880]: no config URL provided Jan 22 00:34:11.739815 ignition[880]: reading system config file "/usr/lib/ignition/user.ign" Jan 22 00:34:11.739824 ignition[880]: no config at "/usr/lib/ignition/user.ign" Jan 22 00:34:11.739845 ignition[880]: PUT http://169.254.169.254/v1/token: attempt #1 Jan 22 00:34:11.833720 ignition[880]: PUT result: OK Jan 22 00:34:11.833768 ignition[880]: GET http://169.254.169.254/v1/user-data: attempt #1 Jan 22 00:34:11.945664 ignition[880]: GET result: OK Jan 22 00:34:11.946499 ignition[880]: parsing config with SHA512: 4b4abfe46836f40f7c32f547b2e1faa47709f43bf5ec6e1b89c6d11ae7bce957cf6d5928bd0515a20b559cd5435b67a65cde28f8bf656d9e4976ba131b37be94 Jan 22 00:34:11.952930 unknown[880]: fetched base config from "system" Jan 22 00:34:11.952949 unknown[880]: fetched base config from "system" Jan 22 00:34:11.953416 ignition[880]: fetch: fetch complete Jan 22 00:34:11.952959 unknown[880]: fetched user config from "akamai" Jan 22 00:34:11.966054 kernel: audit: type=1130 audit(1769042051.956:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:11.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:11.953426 ignition[880]: fetch: fetch passed Jan 22 00:34:11.956322 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 22 00:34:11.953492 ignition[880]: Ignition finished successfully Jan 22 00:34:11.959670 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 22 00:34:11.988643 ignition[887]: Ignition 2.22.0 Jan 22 00:34:11.988657 ignition[887]: Stage: kargs Jan 22 00:34:11.988787 ignition[887]: no configs at "/usr/lib/ignition/base.d" Jan 22 00:34:11.988798 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 22 00:34:11.989653 ignition[887]: kargs: kargs passed Jan 22 00:34:12.001636 kernel: audit: type=1130 audit(1769042051.992:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:11.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:11.991863 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
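The fetch stage above shows Ignition talking to the Akamai (Linode) metadata service: a PUT to http://169.254.169.254/v1/token to obtain a token, an authenticated GET of /v1/user-data, then a SHA-512 digest of the payload it parses. A minimal sketch of that token-then-fetch flow follows; the header names and the timeout are assumptions for illustration and are not taken from this log.

import hashlib
import urllib.request

BASE = "http://169.254.169.254/v1"

# Obtain a short-lived metadata token (header names are assumed, not shown in the log).
token_req = urllib.request.Request(
    BASE + "/token",
    method="PUT",
    headers={"Metadata-Token-Expiry-Seconds": "3600"},
)
with urllib.request.urlopen(token_req, timeout=5) as resp:
    token = resp.read().decode().strip()

# Fetch the instance user-data, as the GET in the log does.
data_req = urllib.request.Request(
    BASE + "/user-data",
    headers={"Metadata-Token": token},
)
with urllib.request.urlopen(data_req, timeout=5) as resp:
    user_data = resp.read()

# Mirror the "parsing config with SHA512: ..." entry by hashing what was fetched.
print(hashlib.sha512(user_data).hexdigest())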
Jan 22 00:34:11.989696 ignition[887]: Ignition finished successfully Jan 22 00:34:11.995647 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 22 00:34:12.024414 ignition[893]: Ignition 2.22.0 Jan 22 00:34:12.024429 ignition[893]: Stage: disks Jan 22 00:34:12.024575 ignition[893]: no configs at "/usr/lib/ignition/base.d" Jan 22 00:34:12.024587 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 22 00:34:12.025639 ignition[893]: disks: disks passed Jan 22 00:34:12.025679 ignition[893]: Ignition finished successfully Jan 22 00:34:12.031221 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 22 00:34:12.040796 kernel: audit: type=1130 audit(1769042052.031:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:12.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:12.032781 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 22 00:34:12.041534 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 22 00:34:12.043338 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 22 00:34:12.045217 systemd[1]: Reached target sysinit.target - System Initialization. Jan 22 00:34:12.046688 systemd[1]: Reached target basic.target - Basic System. Jan 22 00:34:12.049708 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 22 00:34:12.089424 systemd-fsck[901]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 22 00:34:12.092030 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 22 00:34:12.103503 kernel: audit: type=1130 audit(1769042052.092:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:12.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:12.095351 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 22 00:34:12.216537 kernel: EXT4-fs (sda9): mounted filesystem 2fa3c08b-a48e-45e5-aeb3-7441bca9cf30 r/w with ordered data mode. Quota mode: none. Jan 22 00:34:12.216982 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 22 00:34:12.218199 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 22 00:34:12.220667 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 22 00:34:12.223594 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 22 00:34:12.226147 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 22 00:34:12.227305 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 22 00:34:12.227339 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 22 00:34:12.233585 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 22 00:34:12.236184 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 22 00:34:12.243528 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (909) Jan 22 00:34:12.251494 kernel: BTRFS info (device sda6): first mount of filesystem 04d4f92e-e2f4-4570-a15f-a84e10359254 Jan 22 00:34:12.251542 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 22 00:34:12.260540 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 22 00:34:12.260573 kernel: BTRFS info (device sda6): turning on async discard Jan 22 00:34:12.260587 kernel: BTRFS info (device sda6): enabling free space tree Jan 22 00:34:12.265286 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 22 00:34:12.319258 initrd-setup-root[933]: cut: /sysroot/etc/passwd: No such file or directory Jan 22 00:34:12.324436 initrd-setup-root[940]: cut: /sysroot/etc/group: No such file or directory Jan 22 00:34:12.329722 initrd-setup-root[947]: cut: /sysroot/etc/shadow: No such file or directory Jan 22 00:34:12.335675 initrd-setup-root[954]: cut: /sysroot/etc/gshadow: No such file or directory Jan 22 00:34:12.446526 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 22 00:34:12.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:12.451609 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 22 00:34:12.456414 kernel: audit: type=1130 audit(1769042052.447:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:12.464701 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 22 00:34:12.478546 kernel: BTRFS info (device sda6): last unmount of filesystem 04d4f92e-e2f4-4570-a15f-a84e10359254 Jan 22 00:34:12.507670 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 22 00:34:12.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:12.516538 kernel: audit: type=1130 audit(1769042052.507:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:12.520871 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 22 00:34:12.522153 ignition[1021]: INFO : Ignition 2.22.0 Jan 22 00:34:12.522153 ignition[1021]: INFO : Stage: mount Jan 22 00:34:12.523726 ignition[1021]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 22 00:34:12.523726 ignition[1021]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 22 00:34:12.523726 ignition[1021]: INFO : mount: mount passed Jan 22 00:34:12.523726 ignition[1021]: INFO : Ignition finished successfully Jan 22 00:34:12.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:12.525778 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 22 00:34:12.528669 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 22 00:34:12.564504 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 22 00:34:12.590543 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1033) Jan 22 00:34:12.597605 kernel: BTRFS info (device sda6): first mount of filesystem 04d4f92e-e2f4-4570-a15f-a84e10359254 Jan 22 00:34:12.597658 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 22 00:34:12.602856 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 22 00:34:12.602896 kernel: BTRFS info (device sda6): turning on async discard Jan 22 00:34:12.607068 kernel: BTRFS info (device sda6): enabling free space tree Jan 22 00:34:12.609505 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 22 00:34:12.643576 ignition[1049]: INFO : Ignition 2.22.0 Jan 22 00:34:12.644857 ignition[1049]: INFO : Stage: files Jan 22 00:34:12.644857 ignition[1049]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 22 00:34:12.644857 ignition[1049]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 22 00:34:12.647999 ignition[1049]: DEBUG : files: compiled without relabeling support, skipping Jan 22 00:34:12.671212 ignition[1049]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 22 00:34:12.671212 ignition[1049]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 22 00:34:12.673788 ignition[1049]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 22 00:34:12.675227 ignition[1049]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 22 00:34:12.675227 ignition[1049]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 22 00:34:12.675142 unknown[1049]: wrote ssh authorized keys file for user: core Jan 22 00:34:12.678862 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 22 00:34:12.678862 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 22 00:34:12.855001 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 22 00:34:13.078013 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 22 00:34:13.079636 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 22 00:34:13.079636 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 22 00:34:13.079636 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 22 00:34:13.079636 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 22 00:34:13.079636 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 22 00:34:13.079636 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 22 00:34:13.079636 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 22 00:34:13.079636 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Jan 22 00:34:13.088674 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 22 00:34:13.088674 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 22 00:34:13.088674 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 22 00:34:13.088674 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 22 00:34:13.088674 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 22 00:34:13.088674 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 22 00:34:13.417836 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 22 00:34:13.974603 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 22 00:34:13.974603 ignition[1049]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 22 00:34:13.979463 ignition[1049]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 22 00:34:13.979463 ignition[1049]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 22 00:34:13.979463 ignition[1049]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 22 00:34:13.979463 ignition[1049]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 22 00:34:13.979463 ignition[1049]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 22 00:34:13.979463 ignition[1049]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 22 00:34:13.979463 ignition[1049]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 22 00:34:13.979463 ignition[1049]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Jan 22 00:34:13.979463 ignition[1049]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Jan 22 00:34:13.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:34:13.999018 ignition[1049]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 22 00:34:13.999018 ignition[1049]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 22 00:34:13.999018 ignition[1049]: INFO : files: files passed Jan 22 00:34:13.999018 ignition[1049]: INFO : Ignition finished successfully Jan 22 00:34:13.983215 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 22 00:34:13.985668 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 22 00:34:13.989669 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 22 00:34:14.010017 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 22 00:34:14.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:14.011568 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 22 00:34:14.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:14.020742 initrd-setup-root-after-ignition[1080]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 22 00:34:14.020742 initrd-setup-root-after-ignition[1080]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 22 00:34:14.025222 initrd-setup-root-after-ignition[1085]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 22 00:34:14.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:14.025030 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 22 00:34:14.026929 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 22 00:34:14.029691 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 22 00:34:14.104946 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 22 00:34:14.105153 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 22 00:34:14.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:14.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:14.107221 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 22 00:34:14.108921 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 22 00:34:14.111039 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 22 00:34:14.112148 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 22 00:34:14.160984 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Jan 22 00:34:14.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:14.164824 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 22 00:34:14.190020 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 22 00:34:14.190434 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 22 00:34:14.191864 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 22 00:34:14.194701 systemd[1]: Stopped target timers.target - Timer Units. Jan 22 00:34:14.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:14.197109 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 22 00:34:14.197395 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 22 00:34:14.200179 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 22 00:34:14.201721 systemd[1]: Stopped target basic.target - Basic System. Jan 22 00:34:14.203758 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 22 00:34:14.205846 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 22 00:34:14.207914 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 22 00:34:14.209909 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 22 00:34:14.212209 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 22 00:34:14.214727 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 22 00:34:14.217229 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 22 00:34:14.219645 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 22 00:34:14.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:14.222119 systemd[1]: Stopped target swap.target - Swaps. Jan 22 00:34:14.224259 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 22 00:34:14.224614 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 22 00:34:14.227455 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 22 00:34:14.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:14.228851 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 22 00:34:14.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:14.263448 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 22 00:34:14.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:34:14.263872 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 22 00:34:14.265344 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 22 00:34:14.265559 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 22 00:34:14.268423 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 22 00:34:14.268703 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 22 00:34:14.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:14.269775 systemd[1]: ignition-files.service: Deactivated successfully. Jan 22 00:34:14.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:14.269889 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 22 00:34:14.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:14.274613 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 22 00:34:14.278874 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 22 00:34:14.280527 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 22 00:34:14.281543 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 22 00:34:14.283535 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 22 00:34:14.283715 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 22 00:34:14.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:14.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:14.285564 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 22 00:34:14.285733 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 22 00:34:14.304339 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 22 00:34:14.304474 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 22 00:34:14.321545 ignition[1105]: INFO : Ignition 2.22.0 Jan 22 00:34:14.321545 ignition[1105]: INFO : Stage: umount Jan 22 00:34:14.321545 ignition[1105]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 22 00:34:14.321545 ignition[1105]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 22 00:34:14.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:14.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:34:14.333710 ignition[1105]: INFO : umount: umount passed Jan 22 00:34:14.333710 ignition[1105]: INFO : Ignition finished successfully Jan 22 00:34:14.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:14.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:14.325290 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 22 00:34:14.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:14.325443 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 22 00:34:14.328708 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 22 00:34:14.328770 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 22 00:34:14.332790 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 22 00:34:14.332853 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 22 00:34:14.334700 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 22 00:34:14.334763 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 22 00:34:14.336643 systemd[1]: Stopped target network.target - Network. Jan 22 00:34:14.337491 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 22 00:34:14.338597 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 22 00:34:14.340633 systemd[1]: Stopped target paths.target - Path Units. Jan 22 00:34:14.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:14.341431 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 22 00:34:14.344631 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 22 00:34:14.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:14.345757 systemd[1]: Stopped target slices.target - Slice Units. Jan 22 00:34:14.348645 systemd[1]: Stopped target sockets.target - Socket Units. Jan 22 00:34:14.350115 systemd[1]: iscsid.socket: Deactivated successfully. Jan 22 00:34:14.350174 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 22 00:34:14.356675 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 22 00:34:14.356753 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 22 00:34:14.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:14.362816 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 22 00:34:14.362882 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 22 00:34:14.369749 systemd[1]: ignition-setup.service: Deactivated successfully. 
Jan 22 00:34:14.369872 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 22 00:34:14.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:14.372723 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 22 00:34:14.372814 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 22 00:34:14.378256 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 22 00:34:14.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:14.380418 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 22 00:34:14.403000 audit: BPF prog-id=6 op=UNLOAD Jan 22 00:34:14.387878 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 22 00:34:14.403000 audit: BPF prog-id=9 op=UNLOAD Jan 22 00:34:14.389232 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 22 00:34:14.389429 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 22 00:34:14.395681 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 22 00:34:14.395895 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 22 00:34:14.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:14.399747 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 22 00:34:14.399926 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 22 00:34:14.403887 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 22 00:34:14.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:14.405491 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 22 00:34:14.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:14.405604 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 22 00:34:14.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:14.407464 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 22 00:34:14.407603 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 22 00:34:14.411661 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 22 00:34:14.415759 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 22 00:34:14.415868 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 22 00:34:14.417738 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 22 00:34:14.417822 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 22 00:34:14.419832 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Jan 22 00:34:14.419916 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 22 00:34:14.421815 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 22 00:34:14.449587 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 22 00:34:14.450619 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 22 00:34:14.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:14.453762 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 22 00:34:14.453848 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 22 00:34:14.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:14.456234 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 22 00:34:14.456304 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 22 00:34:14.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:14.458019 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 22 00:34:14.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:14.458082 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 22 00:34:14.461170 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 22 00:34:14.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:14.461235 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 22 00:34:14.464850 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 22 00:34:14.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:14.464964 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 22 00:34:14.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:14.468298 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 22 00:34:14.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:14.471181 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 22 00:34:14.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:34:14.471276 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 22 00:34:14.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:14.472494 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 22 00:34:14.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:14.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:14.474662 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 22 00:34:14.475925 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 22 00:34:14.476000 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 22 00:34:14.478074 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 22 00:34:14.478142 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 22 00:34:14.506544 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 22 00:34:14.506635 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 22 00:34:14.509445 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 22 00:34:14.509590 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 22 00:34:14.511030 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 22 00:34:14.511142 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 22 00:34:14.513923 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 22 00:34:14.516718 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 22 00:34:14.536200 systemd[1]: Switching root. Jan 22 00:34:14.569319 systemd-journald[303]: Journal stopped Jan 22 00:34:15.820374 systemd-journald[303]: Received SIGTERM from PID 1 (systemd). Jan 22 00:34:15.820409 kernel: SELinux: policy capability network_peer_controls=1 Jan 22 00:34:15.820423 kernel: SELinux: policy capability open_perms=1 Jan 22 00:34:15.820434 kernel: SELinux: policy capability extended_socket_class=1 Jan 22 00:34:15.820444 kernel: SELinux: policy capability always_check_network=0 Jan 22 00:34:15.820457 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 22 00:34:15.820468 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 22 00:34:15.820478 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 22 00:34:15.820489 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 22 00:34:15.820499 kernel: SELinux: policy capability userspace_initial_context=0 Jan 22 00:34:15.821557 systemd[1]: Successfully loaded SELinux policy in 83.412ms. Jan 22 00:34:15.821581 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.521ms. 
Jan 22 00:34:15.821595 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 22 00:34:15.821606 systemd[1]: Detected virtualization kvm. Jan 22 00:34:15.821620 systemd[1]: Detected architecture x86-64. Jan 22 00:34:15.821631 systemd[1]: Detected first boot. Jan 22 00:34:15.821642 systemd[1]: Initializing machine ID from random generator. Jan 22 00:34:15.821654 zram_generator::config[1150]: No configuration found. Jan 22 00:34:15.821666 kernel: Guest personality initialized and is inactive Jan 22 00:34:15.821677 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 22 00:34:15.821690 kernel: Initialized host personality Jan 22 00:34:15.821701 kernel: NET: Registered PF_VSOCK protocol family Jan 22 00:34:15.821711 systemd[1]: Populated /etc with preset unit settings. Jan 22 00:34:15.821723 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 22 00:34:15.821735 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 22 00:34:15.821746 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 22 00:34:15.821762 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 22 00:34:15.821776 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 22 00:34:15.821787 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 22 00:34:15.821799 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 22 00:34:15.821810 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 22 00:34:15.821821 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 22 00:34:15.821835 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 22 00:34:15.821846 systemd[1]: Created slice user.slice - User and Session Slice. Jan 22 00:34:15.821857 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 22 00:34:15.821868 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 22 00:34:15.821879 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 22 00:34:15.821890 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 22 00:34:15.821901 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 22 00:34:15.821916 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 22 00:34:15.821930 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 22 00:34:15.821941 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 22 00:34:15.821952 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 22 00:34:15.821963 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 22 00:34:15.821974 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 22 00:34:15.821989 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. 
Jan 22 00:34:15.822001 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 22 00:34:15.822012 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 22 00:34:15.822023 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 22 00:34:15.822034 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 22 00:34:15.822046 systemd[1]: Reached target slices.target - Slice Units. Jan 22 00:34:15.822057 systemd[1]: Reached target swap.target - Swaps. Jan 22 00:34:15.822071 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 22 00:34:15.822082 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 22 00:34:15.822093 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 22 00:34:15.822105 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 22 00:34:15.822118 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 22 00:34:15.822129 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 22 00:34:15.822140 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 22 00:34:15.822151 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 22 00:34:15.822163 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 22 00:34:15.822174 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 22 00:34:15.822187 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 22 00:34:15.822198 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 22 00:34:15.822210 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 22 00:34:15.822221 systemd[1]: Mounting media.mount - External Media Directory... Jan 22 00:34:15.822233 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 22 00:34:15.822245 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 22 00:34:15.822256 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 22 00:34:15.822269 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 22 00:34:15.822281 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 22 00:34:15.822293 systemd[1]: Reached target machines.target - Containers. Jan 22 00:34:15.822304 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 22 00:34:15.822315 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 22 00:34:15.822327 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 22 00:34:15.822338 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 22 00:34:15.822352 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 22 00:34:15.822363 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 22 00:34:15.822374 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 22 00:34:15.822385 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 22 00:34:15.822396 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 22 00:34:15.822408 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 22 00:34:15.822422 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 22 00:34:15.822433 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 22 00:34:15.822444 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 22 00:34:15.822455 systemd[1]: Stopped systemd-fsck-usr.service. Jan 22 00:34:15.822467 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 22 00:34:15.822479 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 22 00:34:15.822491 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 22 00:34:15.822504 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 22 00:34:15.822558 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 22 00:34:15.822571 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 22 00:34:15.822583 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 22 00:34:15.822594 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 22 00:34:15.822605 kernel: ACPI: bus type drm_connector registered Jan 22 00:34:15.822620 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 22 00:34:15.822631 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 22 00:34:15.822665 systemd-journald[1242]: Collecting audit messages is enabled. Jan 22 00:34:15.822731 systemd[1]: Mounted media.mount - External Media Directory. Jan 22 00:34:15.822744 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 22 00:34:15.822756 systemd-journald[1242]: Journal started Jan 22 00:34:15.822776 systemd-journald[1242]: Runtime Journal (/run/log/journal/90a54b5797af470badc0db1724283e57) is 8M, max 78.1M, 70.1M free. Jan 22 00:34:15.529000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 22 00:34:15.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:15.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:34:15.728000 audit: BPF prog-id=14 op=UNLOAD Jan 22 00:34:15.728000 audit: BPF prog-id=13 op=UNLOAD Jan 22 00:34:15.729000 audit: BPF prog-id=15 op=LOAD Jan 22 00:34:15.729000 audit: BPF prog-id=16 op=LOAD Jan 22 00:34:15.729000 audit: BPF prog-id=17 op=LOAD Jan 22 00:34:15.811000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 22 00:34:15.811000 audit[1242]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd8d340a00 a2=4000 a3=0 items=0 ppid=1 pid=1242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:15.811000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 22 00:34:15.391285 systemd[1]: Queued start job for default target multi-user.target. Jan 22 00:34:15.412284 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 22 00:34:15.413031 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 22 00:34:15.833852 systemd[1]: Started systemd-journald.service - Journal Service. Jan 22 00:34:15.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:15.830244 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 22 00:34:15.833393 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 22 00:34:15.835938 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 22 00:34:15.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:15.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:15.838580 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 22 00:34:15.839778 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 22 00:34:15.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:15.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:15.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:15.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:15.840927 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
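The SYSCALL audit record above can be read from its arch and syscall fields: arch=c000003e is the x86-64 audit architecture and syscall 46 is sendmsg on x86-64, issued here by systemd-journald with exit=60 (bytes sent). A small decoding sketch follows, with only the two values that actually appear hardcoded; a full decoder would consult the kernel's audit-architecture and syscall tables.

# Decode the arch/syscall fields of the SYSCALL record above.
AUDIT_ARCH = {0xC000003E: "x86_64"}   # AUDIT_ARCH_X86_64
SYSCALLS_X86_64 = {46: "sendmsg"}     # __NR_sendmsg on x86-64

record = {"arch": 0xC000003E, "syscall": 46, "success": "yes", "exit": 60}
print(AUDIT_ARCH[record["arch"]], SYSCALLS_X86_64[record["syscall"]], "returned", record["exit"])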
Jan 22 00:34:15.842914 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 22 00:34:15.843108 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 22 00:34:15.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:15.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:15.844493 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 22 00:34:15.845805 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 22 00:34:15.846982 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 22 00:34:15.847174 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 22 00:34:15.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:15.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:15.848919 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 22 00:34:15.849418 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 22 00:34:15.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:15.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:15.852742 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 22 00:34:15.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:15.854636 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 22 00:34:15.859698 kernel: fuse: init (API version 7.41) Jan 22 00:34:15.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:15.860684 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 22 00:34:15.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:15.868337 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 22 00:34:15.868743 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
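Each modprobe@&lt;name&gt;.service instance finishing above (configfs, dm_mod, drm, efi_pstore, loop, fuse) is the same template unit parameterised by the module name. A rough Python equivalent of what one pass over those instances does, assuming only that each instance ends up invoking modprobe for its instance name; the exact modprobe flags systemd passes are not shown in this log.

```python
#!/usr/bin/env python3
"""Load the kernel modules named by the modprobe@*.service instances above.

A sketch, not systemd's implementation: it shells out to modprobe for each
instance name and reports the result, mirroring the
"Finished modprobe@<name>.service" lines in the journal.
"""
import subprocess

MODULES = ["configfs", "dm_mod", "drm", "efi_pstore", "loop", "fuse"]

def load_module(name: str) -> bool:
    # modprobe exits 0 when the module is loaded or already built in.
    result = subprocess.run(["modprobe", name], capture_output=True, text=True)
    if result.returncode != 0:
        print(f"modprobe {name} failed: {result.stderr.strip()}")
    return result.returncode == 0

if __name__ == "__main__":
    for module in MODULES:
        print(module, "loaded" if load_module(module) else "not loaded")
```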
Jan 22 00:34:15.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:15.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:15.870375 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 22 00:34:15.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:15.880894 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 22 00:34:15.882851 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 22 00:34:15.883738 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 22 00:34:15.883826 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 22 00:34:15.885678 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 22 00:34:15.886774 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 22 00:34:15.887142 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 22 00:34:15.890661 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 22 00:34:15.892752 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 22 00:34:15.894622 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 22 00:34:15.904660 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 22 00:34:15.905452 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 22 00:34:15.910674 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 22 00:34:15.913647 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 22 00:34:15.916795 systemd-journald[1242]: Time spent on flushing to /var/log/journal/90a54b5797af470badc0db1724283e57 is 53.554ms for 1119 entries. Jan 22 00:34:15.916795 systemd-journald[1242]: System Journal (/var/log/journal/90a54b5797af470badc0db1724283e57) is 8M, max 588.1M, 580.1M free. Jan 22 00:34:15.984232 systemd-journald[1242]: Received client request to flush runtime journal. Jan 22 00:34:15.986267 kernel: loop1: detected capacity change from 0 to 111544 Jan 22 00:34:15.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:15.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:34:15.928438 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 22 00:34:15.955813 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 22 00:34:15.958350 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 22 00:34:15.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:15.964715 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 22 00:34:15.985028 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 22 00:34:15.991113 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 22 00:34:16.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:16.014626 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 22 00:34:16.027180 kernel: loop2: detected capacity change from 0 to 119256 Jan 22 00:34:16.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:16.026064 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 22 00:34:16.027063 systemd-tmpfiles[1275]: ACLs are not supported, ignoring. Jan 22 00:34:16.027076 systemd-tmpfiles[1275]: ACLs are not supported, ignoring. Jan 22 00:34:16.034310 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 22 00:34:16.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:16.039991 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 22 00:34:16.066540 kernel: loop3: detected capacity change from 0 to 219144 Jan 22 00:34:16.084867 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 22 00:34:16.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:16.086000 audit: BPF prog-id=18 op=LOAD Jan 22 00:34:16.087000 audit: BPF prog-id=19 op=LOAD Jan 22 00:34:16.087000 audit: BPF prog-id=20 op=LOAD Jan 22 00:34:16.091000 audit: BPF prog-id=21 op=LOAD Jan 22 00:34:16.090669 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 22 00:34:16.100628 kernel: loop4: detected capacity change from 0 to 8 Jan 22 00:34:16.094715 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 22 00:34:16.099697 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
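systemd-machine-id-commit.service, finished above, persists the transient machine ID once the root filesystem is writable. A small read-only check of the result, assuming only the documented format of /etc/machine-id (a single line of 32 lowercase hex digits):

```python
#!/usr/bin/env python3
"""Verify that /etc/machine-id holds a committed (non-transient) machine ID.

Sketch only: a populated machine-id is one line of 32 lowercase hex digits;
a not-yet-committed one is empty or contains a placeholder string.
"""
import re
from pathlib import Path

def read_machine_id(path: str = "/etc/machine-id") -> str | None:
    text = Path(path).read_text().strip()
    return text if re.fullmatch(r"[0-9a-f]{32}", text) else None

if __name__ == "__main__":
    machine_id = read_machine_id()
    print("machine-id committed:", machine_id or "no (still transient/unset)")
```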
Jan 22 00:34:16.108000 audit: BPF prog-id=22 op=LOAD Jan 22 00:34:16.108000 audit: BPF prog-id=23 op=LOAD Jan 22 00:34:16.108000 audit: BPF prog-id=24 op=LOAD Jan 22 00:34:16.109975 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 22 00:34:16.114000 audit: BPF prog-id=25 op=LOAD Jan 22 00:34:16.117000 audit: BPF prog-id=26 op=LOAD Jan 22 00:34:16.117000 audit: BPF prog-id=27 op=LOAD Jan 22 00:34:16.118824 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 22 00:34:16.122532 kernel: loop5: detected capacity change from 0 to 111544 Jan 22 00:34:16.149533 kernel: loop6: detected capacity change from 0 to 119256 Jan 22 00:34:16.171549 kernel: loop7: detected capacity change from 0 to 219144 Jan 22 00:34:16.176491 systemd-tmpfiles[1298]: ACLs are not supported, ignoring. Jan 22 00:34:16.180563 systemd-tmpfiles[1298]: ACLs are not supported, ignoring. Jan 22 00:34:16.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:16.192376 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 22 00:34:16.210534 kernel: loop1: detected capacity change from 0 to 8 Jan 22 00:34:16.215996 (sd-merge)[1301]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-akamai.raw'. Jan 22 00:34:16.223844 (sd-merge)[1301]: Merged extensions into '/usr'. Jan 22 00:34:16.229802 systemd-nsresourced[1300]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 22 00:34:16.232357 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 22 00:34:16.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:16.234450 systemd[1]: Reload requested from client PID 1274 ('systemd-sysext') (unit systemd-sysext.service)... Jan 22 00:34:16.234468 systemd[1]: Reloading... Jan 22 00:34:16.354571 zram_generator::config[1346]: No configuration found. Jan 22 00:34:16.402737 systemd-oomd[1295]: No swap; memory pressure usage will be degraded Jan 22 00:34:16.432950 systemd-resolved[1296]: Positive Trust Anchors: Jan 22 00:34:16.432963 systemd-resolved[1296]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 22 00:34:16.432969 systemd-resolved[1296]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 22 00:34:16.432997 systemd-resolved[1296]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 22 00:34:16.439713 systemd-resolved[1296]: Defaulting to hostname 'linux'. Jan 22 00:34:16.567753 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 22 00:34:16.568053 systemd[1]: Reloading finished in 333 ms. 
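The (sd-merge) lines above show systemd-sysext overlaying four extension images (containerd-flatcar.raw, docker-flatcar.raw, kubernetes.raw, oem-akamai.raw) onto /usr. A quick way to see what would be merged is to list the extension search directories; the directory list below is an assumption based on systemd's documented defaults, and `systemd-sysext status` remains the authoritative view.

```python
#!/usr/bin/env python3
"""List sysext images visible in the usual search directories.

Sketch under an assumption: /etc/extensions, /run/extensions and
/var/lib/extensions are systemd-sysext's default search paths. On this boot
the merged images were containerd-flatcar.raw, docker-flatcar.raw,
kubernetes.raw and oem-akamai.raw.
"""
from pathlib import Path

SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def list_extensions() -> list[str]:
    found = []
    for directory in SEARCH_DIRS:
        base = Path(directory)
        if base.is_dir():
            # Raw disk images and plain directory trees both count as extensions.
            found.extend(sorted(p.name for p in base.iterdir()))
    return found

if __name__ == "__main__":
    for name in list_extensions():
        print(name)
```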
Jan 22 00:34:16.595486 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 22 00:34:16.597550 kernel: kauditd_printk_skb: 111 callbacks suppressed Jan 22 00:34:16.597629 kernel: audit: type=1130 audit(1769042056.595:146): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:16.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:16.596688 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 22 00:34:16.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:16.605188 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 22 00:34:16.611537 kernel: audit: type=1130 audit(1769042056.604:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:16.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:16.611988 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 22 00:34:16.618538 kernel: audit: type=1130 audit(1769042056.611:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:16.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:16.619455 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 22 00:34:16.625535 kernel: audit: type=1130 audit(1769042056.618:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:16.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:16.630677 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 22 00:34:16.632541 kernel: audit: type=1130 audit(1769042056.625:150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:16.633886 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 22 00:34:16.635902 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 22 00:34:16.645721 systemd[1]: Starting ensure-sysext.service... 
Jan 22 00:34:16.670278 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 22 00:34:16.670000 audit: BPF prog-id=8 op=UNLOAD Jan 22 00:34:16.675026 kernel: audit: type=1334 audit(1769042056.670:151): prog-id=8 op=UNLOAD Jan 22 00:34:16.675080 kernel: audit: type=1334 audit(1769042056.670:152): prog-id=7 op=UNLOAD Jan 22 00:34:16.675105 kernel: audit: type=1334 audit(1769042056.671:153): prog-id=28 op=LOAD Jan 22 00:34:16.675134 kernel: audit: type=1334 audit(1769042056.671:154): prog-id=29 op=LOAD Jan 22 00:34:16.670000 audit: BPF prog-id=7 op=UNLOAD Jan 22 00:34:16.671000 audit: BPF prog-id=28 op=LOAD Jan 22 00:34:16.671000 audit: BPF prog-id=29 op=LOAD Jan 22 00:34:16.673653 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 22 00:34:16.694000 audit: BPF prog-id=30 op=LOAD Jan 22 00:34:16.699564 kernel: audit: type=1334 audit(1769042056.694:155): prog-id=30 op=LOAD Jan 22 00:34:16.694000 audit: BPF prog-id=21 op=UNLOAD Jan 22 00:34:16.695000 audit: BPF prog-id=31 op=LOAD Jan 22 00:34:16.695000 audit: BPF prog-id=15 op=UNLOAD Jan 22 00:34:16.695000 audit: BPF prog-id=32 op=LOAD Jan 22 00:34:16.695000 audit: BPF prog-id=33 op=LOAD Jan 22 00:34:16.695000 audit: BPF prog-id=16 op=UNLOAD Jan 22 00:34:16.695000 audit: BPF prog-id=17 op=UNLOAD Jan 22 00:34:16.698000 audit: BPF prog-id=34 op=LOAD Jan 22 00:34:16.699000 audit: BPF prog-id=22 op=UNLOAD Jan 22 00:34:16.700000 audit: BPF prog-id=35 op=LOAD Jan 22 00:34:16.700000 audit: BPF prog-id=36 op=LOAD Jan 22 00:34:16.700000 audit: BPF prog-id=23 op=UNLOAD Jan 22 00:34:16.700000 audit: BPF prog-id=24 op=UNLOAD Jan 22 00:34:16.702000 audit: BPF prog-id=37 op=LOAD Jan 22 00:34:16.702000 audit: BPF prog-id=25 op=UNLOAD Jan 22 00:34:16.702000 audit: BPF prog-id=38 op=LOAD Jan 22 00:34:16.702000 audit: BPF prog-id=39 op=LOAD Jan 22 00:34:16.702000 audit: BPF prog-id=26 op=UNLOAD Jan 22 00:34:16.702000 audit: BPF prog-id=27 op=UNLOAD Jan 22 00:34:16.703000 audit: BPF prog-id=40 op=LOAD Jan 22 00:34:16.704604 systemd-tmpfiles[1392]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 22 00:34:16.704901 systemd-tmpfiles[1392]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 22 00:34:16.705263 systemd-tmpfiles[1392]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 22 00:34:16.704000 audit: BPF prog-id=18 op=UNLOAD Jan 22 00:34:16.704000 audit: BPF prog-id=41 op=LOAD Jan 22 00:34:16.704000 audit: BPF prog-id=42 op=LOAD Jan 22 00:34:16.704000 audit: BPF prog-id=19 op=UNLOAD Jan 22 00:34:16.704000 audit: BPF prog-id=20 op=UNLOAD Jan 22 00:34:16.706813 systemd-tmpfiles[1392]: ACLs are not supported, ignoring. Jan 22 00:34:16.706938 systemd-tmpfiles[1392]: ACLs are not supported, ignoring. Jan 22 00:34:16.707805 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 22 00:34:16.708916 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 22 00:34:16.713850 systemd[1]: Reload requested from client PID 1391 ('systemctl') (unit ensure-sysext.service)... Jan 22 00:34:16.713872 systemd[1]: Reloading... Jan 22 00:34:16.719026 systemd-tmpfiles[1392]: Detected autofs mount point /boot during canonicalization of boot. 
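Once kauditd starts reporting suppressed callbacks, the same events appear twice above: once under their userspace name (SERVICE_START, BPF) and once as a raw kernel record with a numeric type and an audit(epoch:serial) stamp. The sketch below only encodes the mapping that can be read straight off those adjacent lines (1130 pairs with SERVICE_START, 1334 with the BPF prog load/unload records).

```python
#!/usr/bin/env python3
"""Translate the numeric audit record types printed by the kernel above.

Only the two types that actually appear in this log are mapped; the pairing
is taken from the adjacent journal lines that carry both forms of the same
event.
"""
import re

TYPE_NAMES = {1130: "SERVICE_START", 1334: "BPF"}

LINE = "kernel: audit: type=1334 audit(1769042056.670:151): prog-id=8 op=UNLOAD"

def describe(line: str) -> str:
    match = re.search(r"type=(\d+) audit\((\d+\.\d+):(\d+)\)", line)
    if not match:
        return "not a kernel audit line"
    rtype, timestamp, serial = int(match.group(1)), match.group(2), match.group(3)
    name = TYPE_NAMES.get(rtype, f"type {rtype}")
    return f"{name} record, serial {serial}, at epoch {timestamp}"

if __name__ == "__main__":
    print(describe(LINE))
```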
Jan 22 00:34:16.719104 systemd-tmpfiles[1392]: Skipping /boot Jan 22 00:34:16.741043 systemd-tmpfiles[1392]: Detected autofs mount point /boot during canonicalization of boot. Jan 22 00:34:16.741132 systemd-tmpfiles[1392]: Skipping /boot Jan 22 00:34:16.754544 systemd-udevd[1393]: Using default interface naming scheme 'v257'. Jan 22 00:34:16.839545 zram_generator::config[1451]: No configuration found. Jan 22 00:34:16.961541 kernel: mousedev: PS/2 mouse device common for all mice Jan 22 00:34:16.982438 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 22 00:34:17.038547 kernel: ACPI: button: Power Button [PWRF] Jan 22 00:34:17.056535 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 22 00:34:17.060544 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 22 00:34:17.065664 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 22 00:34:17.065942 systemd[1]: Reloading finished in 351 ms. Jan 22 00:34:17.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:17.077425 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 22 00:34:17.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:17.086639 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 22 00:34:17.091000 audit: BPF prog-id=43 op=LOAD Jan 22 00:34:17.091000 audit: BPF prog-id=44 op=LOAD Jan 22 00:34:17.091000 audit: BPF prog-id=28 op=UNLOAD Jan 22 00:34:17.091000 audit: BPF prog-id=29 op=UNLOAD Jan 22 00:34:17.092000 audit: BPF prog-id=45 op=LOAD Jan 22 00:34:17.092000 audit: BPF prog-id=31 op=UNLOAD Jan 22 00:34:17.092000 audit: BPF prog-id=46 op=LOAD Jan 22 00:34:17.092000 audit: BPF prog-id=47 op=LOAD Jan 22 00:34:17.092000 audit: BPF prog-id=32 op=UNLOAD Jan 22 00:34:17.092000 audit: BPF prog-id=33 op=UNLOAD Jan 22 00:34:17.093000 audit: BPF prog-id=48 op=LOAD Jan 22 00:34:17.093000 audit: BPF prog-id=37 op=UNLOAD Jan 22 00:34:17.093000 audit: BPF prog-id=49 op=LOAD Jan 22 00:34:17.093000 audit: BPF prog-id=50 op=LOAD Jan 22 00:34:17.093000 audit: BPF prog-id=38 op=UNLOAD Jan 22 00:34:17.093000 audit: BPF prog-id=39 op=UNLOAD Jan 22 00:34:17.095000 audit: BPF prog-id=51 op=LOAD Jan 22 00:34:17.095000 audit: BPF prog-id=40 op=UNLOAD Jan 22 00:34:17.095000 audit: BPF prog-id=52 op=LOAD Jan 22 00:34:17.096000 audit: BPF prog-id=53 op=LOAD Jan 22 00:34:17.096000 audit: BPF prog-id=41 op=UNLOAD Jan 22 00:34:17.096000 audit: BPF prog-id=42 op=UNLOAD Jan 22 00:34:17.096000 audit: BPF prog-id=54 op=LOAD Jan 22 00:34:17.096000 audit: BPF prog-id=34 op=UNLOAD Jan 22 00:34:17.096000 audit: BPF prog-id=55 op=LOAD Jan 22 00:34:17.096000 audit: BPF prog-id=56 op=LOAD Jan 22 00:34:17.096000 audit: BPF prog-id=35 op=UNLOAD Jan 22 00:34:17.096000 audit: BPF prog-id=36 op=UNLOAD Jan 22 00:34:17.098000 audit: BPF prog-id=57 op=LOAD Jan 22 00:34:17.102000 audit: BPF prog-id=30 op=UNLOAD Jan 22 00:34:17.124354 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 22 00:34:17.128315 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
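systemd-tmpfiles warns above about duplicate lines for /var/lib/nfs/sm, /var/lib/nfs/sm.bak and /root; when two fragments declare the same path, the later declaration is ignored. A small sketch of the same duplicate detection, assuming only the standard tmpfiles.d layout of one entry per non-comment line with the path in the second column:

```python
#!/usr/bin/env python3
"""Report duplicate tmpfiles.d path entries, like the warnings above.

Sketch: scans the standard fragment directories, takes the second
whitespace-separated column (the path) of every non-comment line, and
flags paths declared more than once.
"""
from collections import defaultdict
from pathlib import Path

FRAGMENT_DIRS = ["/etc/tmpfiles.d", "/run/tmpfiles.d", "/usr/lib/tmpfiles.d"]

def find_duplicate_paths() -> dict[str, list[str]]:
    seen: dict[str, list[str]] = defaultdict(list)
    for directory in FRAGMENT_DIRS:
        base = Path(directory)
        if not base.is_dir():
            continue
        for conf in sorted(base.glob("*.conf")):
            for lineno, line in enumerate(conf.read_text().splitlines(), start=1):
                parts = line.split()
                # Format is "Type Path Mode User Group Age Argument";
                # skip comments and blank lines.
                if len(parts) >= 2 and not parts[0].startswith("#"):
                    seen[parts[1]].append(f"{conf}:{lineno}")
    return {path: refs for path, refs in seen.items() if len(refs) > 1}

if __name__ == "__main__":
    for path, refs in find_duplicate_paths().items():
        print(f'Duplicate line for path "{path}": {", ".join(refs)}')
```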
Jan 22 00:34:17.130751 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 22 00:34:17.139000 audit: BPF prog-id=58 op=LOAD Jan 22 00:34:17.136715 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 22 00:34:17.143830 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 22 00:34:17.151777 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 22 00:34:17.165539 kernel: EDAC MC: Ver: 3.0.0 Jan 22 00:34:17.168946 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 22 00:34:17.169153 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 22 00:34:17.172582 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 22 00:34:17.180860 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 22 00:34:17.202005 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 22 00:34:17.203701 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 22 00:34:17.203963 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 22 00:34:17.204103 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 22 00:34:17.204239 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 22 00:34:17.216626 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 22 00:34:17.216838 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 22 00:34:17.217278 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 22 00:34:17.217553 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 22 00:34:17.217688 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 22 00:34:17.224044 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 22 00:34:17.226584 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 22 00:34:17.239918 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 22 00:34:17.241737 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 22 00:34:17.244200 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
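clean-ca-certificates.service, started just above, does what its description says: it clears symlinks in /etc/ssl/certs whose targets no longer exist. A read-only sketch of that scan (it only reports, it does not delete anything):

```python
#!/usr/bin/env python3
"""List dangling symlinks under /etc/ssl/certs.

Read-only sketch of the check clean-ca-certificates.service performs: a
symlink is "broken" when the path it points to no longer resolves.
"""
from pathlib import Path

def broken_links(directory: str = "/etc/ssl/certs") -> list[Path]:
    base = Path(directory)
    if not base.is_dir():
        return []
    # Path.exists() follows the symlink, so a broken link is a symlink
    # that "does not exist" once dereferenced.
    return [p for p in base.iterdir() if p.is_symlink() and not p.exists()]

if __name__ == "__main__":
    for link in broken_links():
        print(link, "->", link.readlink())
```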
Jan 22 00:34:17.245108 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 22 00:34:17.245357 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 22 00:34:17.245492 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 22 00:34:17.245000 audit[1516]: SYSTEM_BOOT pid=1516 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 22 00:34:17.247716 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 22 00:34:17.274589 systemd[1]: Finished ensure-sysext.service. Jan 22 00:34:17.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:17.278000 audit: BPF prog-id=59 op=LOAD Jan 22 00:34:17.281813 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 22 00:34:17.284468 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 22 00:34:17.287671 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 22 00:34:17.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:17.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:17.290202 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 22 00:34:17.290446 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 22 00:34:17.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:17.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:17.293498 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 22 00:34:17.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:17.303812 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 22 00:34:17.327824 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 22 00:34:17.328400 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Jan 22 00:34:17.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:17.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:17.356864 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 22 00:34:17.358048 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 22 00:34:17.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:17.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:17.362037 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 22 00:34:17.385783 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 22 00:34:17.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:17.439069 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 22 00:34:17.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:17.440973 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 22 00:34:17.471000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 22 00:34:17.471000 audit[1561]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffeaa8d2cc0 a2=420 a3=0 items=0 ppid=1511 pid=1561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:17.471000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 22 00:34:17.471865 augenrules[1561]: No rules Jan 22 00:34:17.474217 systemd[1]: audit-rules.service: Deactivated successfully. Jan 22 00:34:17.475661 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 22 00:34:17.523052 systemd-networkd[1515]: lo: Link UP Jan 22 00:34:17.523067 systemd-networkd[1515]: lo: Gained carrier Jan 22 00:34:17.538863 systemd-networkd[1515]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 22 00:34:17.538873 systemd-networkd[1515]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
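The PROCTITLE field in the auditctl record above is hex-encoded because the process title contains NUL separators between argv entries. Decoding it recovers the exact command behind the augenrules run that reported "No rules":

```python
#!/usr/bin/env python3
"""Decode the hex-encoded PROCTITLE from the audit record above.

The value is argv joined by NUL bytes; decoding yields
"/sbin/auditctl -R /etc/audit/audit.rules".
"""
PROCTITLE = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"

def decode_proctitle(hex_value: str) -> list[str]:
    return bytes.fromhex(hex_value).decode().split("\x00")

if __name__ == "__main__":
    print(" ".join(decode_proctitle(PROCTITLE)))
```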
Jan 22 00:34:17.539627 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 22 00:34:17.540906 systemd[1]: Reached target network.target - Network. Jan 22 00:34:17.546060 systemd-networkd[1515]: eth0: Link UP Jan 22 00:34:17.546302 systemd-networkd[1515]: eth0: Gained carrier Jan 22 00:34:17.546321 systemd-networkd[1515]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 22 00:34:17.547580 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 22 00:34:17.554154 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 22 00:34:17.555208 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 22 00:34:17.556674 systemd[1]: Reached target time-set.target - System Time Set. Jan 22 00:34:17.593787 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 22 00:34:17.688414 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 22 00:34:17.691889 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 22 00:34:17.696880 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 22 00:34:17.720428 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 22 00:34:17.879399 ldconfig[1513]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 22 00:34:17.882979 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 22 00:34:17.885848 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 22 00:34:17.905351 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 22 00:34:17.906439 systemd[1]: Reached target sysinit.target - System Initialization. Jan 22 00:34:17.907291 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 22 00:34:17.908091 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 22 00:34:17.908919 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 22 00:34:17.910010 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 22 00:34:17.910854 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 22 00:34:17.911669 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 22 00:34:17.912546 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 22 00:34:17.913401 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 22 00:34:17.914197 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 22 00:34:17.914232 systemd[1]: Reached target paths.target - Path Units. Jan 22 00:34:17.914940 systemd[1]: Reached target timers.target - Timer Units. Jan 22 00:34:17.917265 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 22 00:34:17.920074 systemd[1]: Starting docker.socket - Docker Socket for the API... 
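systemd-networkd reports lo and eth0 gaining carrier above; the same link state can be read directly from sysfs, which is handy when debugging why systemd-networkd-wait-online stalls. A minimal sketch using the standard /sys/class/net attributes:

```python
#!/usr/bin/env python3
"""Read link state for the interfaces mentioned above straight from sysfs.

/sys/class/net/<ifname>/operstate and .../carrier are standard kernel
attributes; reading carrier on an administratively-down link raises an
OSError, which the sketch treats as "no carrier".
"""
from pathlib import Path

def link_state(ifname: str) -> tuple[str, bool]:
    base = Path("/sys/class/net") / ifname
    operstate = (base / "operstate").read_text().strip()
    try:
        carrier = (base / "carrier").read_text().strip() == "1"
    except OSError:
        carrier = False
    return operstate, carrier

if __name__ == "__main__":
    for ifname in ("lo", "eth0"):
        state, carrier = link_state(ifname)
        print(f"{ifname}: operstate={state} carrier={'yes' if carrier else 'no'}")
```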
Jan 22 00:34:17.923289 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 22 00:34:17.924329 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 22 00:34:17.925107 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 22 00:34:17.928138 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 22 00:34:17.929418 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 22 00:34:17.930967 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 22 00:34:17.932686 systemd[1]: Reached target sockets.target - Socket Units. Jan 22 00:34:17.933459 systemd[1]: Reached target basic.target - Basic System. Jan 22 00:34:17.934193 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 22 00:34:17.934231 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 22 00:34:17.935270 systemd[1]: Starting containerd.service - containerd container runtime... Jan 22 00:34:17.938640 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 22 00:34:17.949661 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 22 00:34:17.952694 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 22 00:34:17.956471 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 22 00:34:17.959726 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 22 00:34:17.960571 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 22 00:34:17.964720 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 22 00:34:17.970721 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 22 00:34:17.975126 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 22 00:34:17.983201 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 22 00:34:17.992602 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 22 00:34:18.000874 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 22 00:34:18.002664 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 22 00:34:18.003746 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 22 00:34:18.007019 jq[1587]: false Jan 22 00:34:18.009314 systemd[1]: Starting update-engine.service - Update Engine... Jan 22 00:34:18.015712 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 22 00:34:18.023572 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 22 00:34:18.025864 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 22 00:34:18.028238 google_oslogin_nss_cache[1589]: oslogin_cache_refresh[1589]: Refreshing passwd entry cache Jan 22 00:34:18.029767 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
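At this point systemd is listening on several activation sockets (dbus.socket, docker.socket, the sshd sockets created by systemd-ssh-generator) on behalf of services that have not started yet. A quick way to confirm which sockets are armed is `systemctl list-sockets`; the sketch below just wraps that call.

```python
#!/usr/bin/env python3
"""Print the activation sockets systemd is currently listening on.

Thin wrapper around `systemctl list-sockets`; the socket units named in the
journal above (dbus.socket, docker.socket, sshd.socket, ...) should appear
in its output once they reach the "listening" state.
"""
import subprocess

def list_sockets() -> str:
    result = subprocess.run(
        ["systemctl", "list-sockets", "--no-pager"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(list_sockets())
```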
Jan 22 00:34:18.030654 oslogin_cache_refresh[1589]: Refreshing passwd entry cache Jan 22 00:34:18.031933 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 22 00:34:18.032594 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 22 00:34:18.036288 google_oslogin_nss_cache[1589]: oslogin_cache_refresh[1589]: Failure getting users, quitting Jan 22 00:34:18.036341 oslogin_cache_refresh[1589]: Failure getting users, quitting Jan 22 00:34:18.036400 google_oslogin_nss_cache[1589]: oslogin_cache_refresh[1589]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 22 00:34:18.036432 oslogin_cache_refresh[1589]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 22 00:34:18.036539 google_oslogin_nss_cache[1589]: oslogin_cache_refresh[1589]: Refreshing group entry cache Jan 22 00:34:18.036580 oslogin_cache_refresh[1589]: Refreshing group entry cache Jan 22 00:34:18.037091 google_oslogin_nss_cache[1589]: oslogin_cache_refresh[1589]: Failure getting groups, quitting Jan 22 00:34:18.037372 oslogin_cache_refresh[1589]: Failure getting groups, quitting Jan 22 00:34:18.037436 google_oslogin_nss_cache[1589]: oslogin_cache_refresh[1589]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 22 00:34:18.037464 oslogin_cache_refresh[1589]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 22 00:34:18.041156 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 22 00:34:18.042074 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 22 00:34:18.060670 jq[1599]: true Jan 22 00:34:18.088579 update_engine[1597]: I20260122 00:34:18.088274 1597 main.cc:92] Flatcar Update Engine starting Jan 22 00:34:18.094109 extend-filesystems[1588]: Found /dev/sda6 Jan 22 00:34:18.101533 jq[1622]: true Jan 22 00:34:18.101835 tar[1605]: linux-amd64/LICENSE Jan 22 00:34:18.101835 tar[1605]: linux-amd64/helm Jan 22 00:34:18.102412 extend-filesystems[1588]: Found /dev/sda9 Jan 22 00:34:18.120195 extend-filesystems[1588]: Checking size of /dev/sda9 Jan 22 00:34:18.121044 systemd[1]: motdgen.service: Deactivated successfully. Jan 22 00:34:18.122570 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 22 00:34:18.138414 systemd-logind[1596]: Watching system buttons on /dev/input/event2 (Power Button) Jan 22 00:34:18.138739 systemd-logind[1596]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 22 00:34:18.139709 systemd-logind[1596]: New seat seat0. Jan 22 00:34:18.142090 systemd[1]: Started systemd-logind.service - User Login Management. Jan 22 00:34:18.154278 dbus-daemon[1585]: [system] SELinux support is enabled Jan 22 00:34:18.154679 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 22 00:34:18.164884 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 22 00:34:18.164915 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 22 00:34:18.166269 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 22 00:34:18.166287 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jan 22 00:34:18.176751 dbus-daemon[1585]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 22 00:34:18.178543 systemd[1]: Started update-engine.service - Update Engine. Jan 22 00:34:18.179496 update_engine[1597]: I20260122 00:34:18.179453 1597 update_check_scheduler.cc:74] Next update check in 3m27s Jan 22 00:34:18.183765 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 22 00:34:18.189636 extend-filesystems[1588]: Resized partition /dev/sda9 Jan 22 00:34:18.192541 coreos-metadata[1584]: Jan 22 00:34:18.192 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Jan 22 00:34:18.195502 extend-filesystems[1646]: resize2fs 1.47.3 (8-Jul-2025) Jan 22 00:34:18.203538 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19377147 blocks Jan 22 00:34:18.288629 systemd-networkd[1515]: eth0: DHCPv4 address 172.232.4.171/24, gateway 172.232.4.1 acquired from 23.213.15.242 Jan 22 00:34:18.289493 systemd-timesyncd[1531]: Network configuration changed, trying to establish connection. Jan 22 00:34:18.294262 dbus-daemon[1585]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1515 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 22 00:34:18.302390 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 22 00:34:18.335067 bash[1656]: Updated "/home/core/.ssh/authorized_keys" Jan 22 00:34:18.340590 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 22 00:34:18.346839 systemd[1]: Starting sshkeys.service... Jan 22 00:34:18.411022 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 22 00:34:18.420013 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 22 00:34:18.564261 coreos-metadata[1670]: Jan 22 00:34:18.564 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Jan 22 00:34:18.568535 kernel: EXT4-fs (sda9): resized filesystem to 19377147 Jan 22 00:34:18.573525 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 22 00:34:18.576000 dbus-daemon[1585]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 22 00:34:18.576593 dbus-daemon[1585]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1663 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 22 00:34:18.579718 systemd-timesyncd[1531]: Contacted time server 172.235.154.118:123 (0.flatcar.pool.ntp.org). Jan 22 00:34:18.581075 extend-filesystems[1646]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 22 00:34:18.581075 extend-filesystems[1646]: old_desc_blocks = 1, new_desc_blocks = 10 Jan 22 00:34:18.581075 extend-filesystems[1646]: The filesystem on /dev/sda9 is now 19377147 (4k) blocks long. Jan 22 00:34:18.579768 systemd-timesyncd[1531]: Initial clock synchronization to Thu 2026-01-22 00:34:18.789404 UTC. 
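The on-line EXT4 resize above grows /dev/sda9 from 1617920 to 19377147 blocks at a 4 KiB block size. The arithmetic below only converts the figures already present in the log, showing the root filesystem going from roughly 6.2 GiB to roughly 73.9 GiB after resize2fs ran.

```python
#!/usr/bin/env python3
"""Convert the block counts from the EXT4 resize messages above into sizes.

Uses only numbers present in the log: 4096-byte blocks, 1617920 blocks
before the resize and 19377147 blocks afterwards.
"""
BLOCK_SIZE = 4096  # "(4k) blocks" per the kernel/extend-filesystems messages

def blocks_to_gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30

if __name__ == "__main__":
    before, after = 1617920, 19377147
    print(f"before: {blocks_to_gib(before):.1f} GiB")  # ~6.2 GiB
    print(f"after:  {blocks_to_gib(after):.1f} GiB")   # ~73.9 GiB
```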
Jan 22 00:34:18.589539 containerd[1629]: time="2026-01-22T00:34:18Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 22 00:34:18.589758 extend-filesystems[1588]: Resized filesystem in /dev/sda9 Jan 22 00:34:18.584404 systemd[1]: Starting polkit.service - Authorization Manager... Jan 22 00:34:18.587159 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 22 00:34:18.587453 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 22 00:34:18.595863 containerd[1629]: time="2026-01-22T00:34:18.595828754Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 22 00:34:18.599999 locksmithd[1643]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 22 00:34:18.622463 containerd[1629]: time="2026-01-22T00:34:18.620115966Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.41µs" Jan 22 00:34:18.622463 containerd[1629]: time="2026-01-22T00:34:18.620141956Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 22 00:34:18.622463 containerd[1629]: time="2026-01-22T00:34:18.620177866Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 22 00:34:18.622463 containerd[1629]: time="2026-01-22T00:34:18.620189186Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 22 00:34:18.622463 containerd[1629]: time="2026-01-22T00:34:18.620338966Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 22 00:34:18.622463 containerd[1629]: time="2026-01-22T00:34:18.620357426Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 22 00:34:18.622463 containerd[1629]: time="2026-01-22T00:34:18.620421567Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 22 00:34:18.622463 containerd[1629]: time="2026-01-22T00:34:18.620436297Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 22 00:34:18.622711 containerd[1629]: time="2026-01-22T00:34:18.622680318Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 22 00:34:18.622711 containerd[1629]: time="2026-01-22T00:34:18.622707268Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 22 00:34:18.622755 containerd[1629]: time="2026-01-22T00:34:18.622720078Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 22 00:34:18.622755 containerd[1629]: time="2026-01-22T00:34:18.622728618Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 22 00:34:18.622959 containerd[1629]: time="2026-01-22T00:34:18.622931848Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe 
erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 22 00:34:18.622959 containerd[1629]: time="2026-01-22T00:34:18.622953258Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 22 00:34:18.623225 containerd[1629]: time="2026-01-22T00:34:18.623047558Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 22 00:34:18.623439 containerd[1629]: time="2026-01-22T00:34:18.623414978Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 22 00:34:18.623471 containerd[1629]: time="2026-01-22T00:34:18.623454468Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 22 00:34:18.623471 containerd[1629]: time="2026-01-22T00:34:18.623464988Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 22 00:34:18.623525 containerd[1629]: time="2026-01-22T00:34:18.623488418Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 22 00:34:18.623669 containerd[1629]: time="2026-01-22T00:34:18.623642728Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 22 00:34:18.623737 containerd[1629]: time="2026-01-22T00:34:18.623713528Z" level=info msg="metadata content store policy set" policy=shared Jan 22 00:34:18.628565 containerd[1629]: time="2026-01-22T00:34:18.628524641Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 22 00:34:18.628602 containerd[1629]: time="2026-01-22T00:34:18.628588631Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 22 00:34:18.628735 containerd[1629]: time="2026-01-22T00:34:18.628708901Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 22 00:34:18.628758 containerd[1629]: time="2026-01-22T00:34:18.628730081Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 22 00:34:18.628793 containerd[1629]: time="2026-01-22T00:34:18.628768961Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 22 00:34:18.628793 containerd[1629]: time="2026-01-22T00:34:18.628780721Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 22 00:34:18.628830 containerd[1629]: time="2026-01-22T00:34:18.628796891Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 22 00:34:18.628830 containerd[1629]: time="2026-01-22T00:34:18.628805651Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 22 00:34:18.628864 containerd[1629]: time="2026-01-22T00:34:18.628833841Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 22 00:34:18.628864 containerd[1629]: time="2026-01-22T00:34:18.628845931Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 22 00:34:18.628864 
containerd[1629]: time="2026-01-22T00:34:18.628854751Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 22 00:34:18.628864 containerd[1629]: time="2026-01-22T00:34:18.628863701Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 22 00:34:18.628939 containerd[1629]: time="2026-01-22T00:34:18.628873231Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 22 00:34:18.628939 containerd[1629]: time="2026-01-22T00:34:18.628887821Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 22 00:34:18.629126 containerd[1629]: time="2026-01-22T00:34:18.629027441Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 22 00:34:18.629154 containerd[1629]: time="2026-01-22T00:34:18.629128681Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 22 00:34:18.629154 containerd[1629]: time="2026-01-22T00:34:18.629142241Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 22 00:34:18.629203 containerd[1629]: time="2026-01-22T00:34:18.629156831Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 22 00:34:18.629203 containerd[1629]: time="2026-01-22T00:34:18.629185091Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 22 00:34:18.629203 containerd[1629]: time="2026-01-22T00:34:18.629194881Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 22 00:34:18.629254 containerd[1629]: time="2026-01-22T00:34:18.629205351Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 22 00:34:18.629254 containerd[1629]: time="2026-01-22T00:34:18.629215841Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 22 00:34:18.629254 containerd[1629]: time="2026-01-22T00:34:18.629234301Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 22 00:34:18.630529 containerd[1629]: time="2026-01-22T00:34:18.630228651Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 22 00:34:18.630529 containerd[1629]: time="2026-01-22T00:34:18.630245561Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 22 00:34:18.630529 containerd[1629]: time="2026-01-22T00:34:18.630264201Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 22 00:34:18.630529 containerd[1629]: time="2026-01-22T00:34:18.630320811Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 22 00:34:18.630529 containerd[1629]: time="2026-01-22T00:34:18.630332801Z" level=info msg="Start snapshots syncer" Jan 22 00:34:18.630529 containerd[1629]: time="2026-01-22T00:34:18.630449592Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 22 00:34:18.630902 containerd[1629]: time="2026-01-22T00:34:18.630859772Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 22 00:34:18.630997 containerd[1629]: time="2026-01-22T00:34:18.630929212Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 22 00:34:18.631020 containerd[1629]: time="2026-01-22T00:34:18.630995612Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 22 00:34:18.631188 containerd[1629]: time="2026-01-22T00:34:18.631160882Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 22 00:34:18.631223 containerd[1629]: time="2026-01-22T00:34:18.631187132Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 22 00:34:18.631412 containerd[1629]: time="2026-01-22T00:34:18.631391202Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 22 00:34:18.631436 containerd[1629]: time="2026-01-22T00:34:18.631410942Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 22 00:34:18.631436 containerd[1629]: time="2026-01-22T00:34:18.631421802Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 22 00:34:18.631436 containerd[1629]: time="2026-01-22T00:34:18.631431392Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 22 00:34:18.631762 containerd[1629]: time="2026-01-22T00:34:18.631741022Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 22 00:34:18.631762 containerd[1629]: time="2026-01-22T00:34:18.631760862Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 22 
00:34:18.631820 containerd[1629]: time="2026-01-22T00:34:18.631771482Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 22 00:34:18.631840 containerd[1629]: time="2026-01-22T00:34:18.631825432Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 22 00:34:18.631859 containerd[1629]: time="2026-01-22T00:34:18.631838192Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 22 00:34:18.631859 containerd[1629]: time="2026-01-22T00:34:18.631846032Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 22 00:34:18.631859 containerd[1629]: time="2026-01-22T00:34:18.631855072Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 22 00:34:18.631909 containerd[1629]: time="2026-01-22T00:34:18.631862402Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 22 00:34:18.631909 containerd[1629]: time="2026-01-22T00:34:18.631898512Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 22 00:34:18.631948 containerd[1629]: time="2026-01-22T00:34:18.631909342Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 22 00:34:18.631948 containerd[1629]: time="2026-01-22T00:34:18.631925422Z" level=info msg="runtime interface created" Jan 22 00:34:18.631948 containerd[1629]: time="2026-01-22T00:34:18.631931352Z" level=info msg="created NRI interface" Jan 22 00:34:18.631996 containerd[1629]: time="2026-01-22T00:34:18.631938582Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 22 00:34:18.633837 containerd[1629]: time="2026-01-22T00:34:18.632777963Z" level=info msg="Connect containerd service" Jan 22 00:34:18.633837 containerd[1629]: time="2026-01-22T00:34:18.632811663Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 22 00:34:18.634467 containerd[1629]: time="2026-01-22T00:34:18.634258923Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 22 00:34:18.699289 coreos-metadata[1670]: Jan 22 00:34:18.691 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Jan 22 00:34:18.705151 polkitd[1678]: Started polkitd version 126 Jan 22 00:34:18.709951 polkitd[1678]: Loading rules from directory /etc/polkit-1/rules.d Jan 22 00:34:18.711266 polkitd[1678]: Loading rules from directory /run/polkit-1/rules.d Jan 22 00:34:18.711405 polkitd[1678]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 22 00:34:18.712701 polkitd[1678]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 22 00:34:18.712740 polkitd[1678]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 22 00:34:18.712779 polkitd[1678]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 22 00:34:18.715654 polkitd[1678]: Finished 
loading, compiling and executing 2 rules Jan 22 00:34:18.715974 systemd[1]: Started polkit.service - Authorization Manager. Jan 22 00:34:18.717153 dbus-daemon[1585]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 22 00:34:18.718103 polkitd[1678]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 22 00:34:18.742757 systemd-hostnamed[1663]: Hostname set to <172-232-4-171> (transient) Jan 22 00:34:18.746599 systemd-resolved[1296]: System hostname changed to '172-232-4-171'. Jan 22 00:34:18.827982 coreos-metadata[1670]: Jan 22 00:34:18.827 INFO Fetch successful Jan 22 00:34:18.835810 containerd[1629]: time="2026-01-22T00:34:18.835771764Z" level=info msg="Start subscribing containerd event" Jan 22 00:34:18.836377 containerd[1629]: time="2026-01-22T00:34:18.836336294Z" level=info msg="Start recovering state" Jan 22 00:34:18.837201 containerd[1629]: time="2026-01-22T00:34:18.836504604Z" level=info msg="Start event monitor" Jan 22 00:34:18.837271 containerd[1629]: time="2026-01-22T00:34:18.837253875Z" level=info msg="Start cni network conf syncer for default" Jan 22 00:34:18.837460 containerd[1629]: time="2026-01-22T00:34:18.837446415Z" level=info msg="Start streaming server" Jan 22 00:34:18.837522 containerd[1629]: time="2026-01-22T00:34:18.837495205Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 22 00:34:18.837582 containerd[1629]: time="2026-01-22T00:34:18.837570855Z" level=info msg="runtime interface starting up..." Jan 22 00:34:18.837629 containerd[1629]: time="2026-01-22T00:34:18.837618865Z" level=info msg="starting plugins..." Jan 22 00:34:18.837726 containerd[1629]: time="2026-01-22T00:34:18.837714535Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 22 00:34:18.839435 containerd[1629]: time="2026-01-22T00:34:18.839309036Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 22 00:34:18.839495 containerd[1629]: time="2026-01-22T00:34:18.839481226Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 22 00:34:18.840134 containerd[1629]: time="2026-01-22T00:34:18.840082516Z" level=info msg="containerd successfully booted in 0.251285s" Jan 22 00:34:18.841800 systemd[1]: Started containerd.service - containerd container runtime. Jan 22 00:34:18.861064 update-ssh-keys[1704]: Updated "/home/core/.ssh/authorized_keys" Jan 22 00:34:18.863136 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 22 00:34:18.870031 systemd[1]: Finished sshkeys.service. Jan 22 00:34:18.963711 systemd-networkd[1515]: eth0: Gained IPv6LL Jan 22 00:34:18.968992 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 22 00:34:18.971006 systemd[1]: Reached target network-online.target - Network is Online. Jan 22 00:34:18.975831 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 22 00:34:18.978658 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 22 00:34:19.007299 tar[1605]: linux-amd64/README.md Jan 22 00:34:19.036077 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 22 00:34:19.043252 sshd_keygen[1626]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 22 00:34:19.046228 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 22 00:34:19.071321 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 22 00:34:19.074478 systemd[1]: Starting issuegen.service - Generate /run/issue... 
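The containerd error earlier in this boot ("no network config found in /etc/cni/net.d: cni plugin not initialized") is expected on a freshly provisioned node: no CNI provider has been installed yet, and the CRI network stays not-ready until a config file appears in /etc/cni/net.d (the confDir value in the CRI config dump above). Purely as an illustrative sketch, not something taken from this host, the snippet below writes the kind of minimal bridge conflist a network add-on would normally drop there; the file name, network name and subnet are assumptions.

```python
# Illustrative only: a minimal CNI bridge config of the kind a network add-on would
# normally install under /etc/cni/net.d. File name, network name and subnet are
# assumptions made for this sketch; they are not values taken from the log above.
import json

conflist = {
    "cniVersion": "1.0.0",
    "name": "example-net",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "ranges": [[{"subnet": "10.85.0.0/16"}]],
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        }
    ],
}

# Requires root; the "cni network conf syncer" started above picks the file up automatically.
with open("/etc/cni/net.d/10-example.conflist", "w") as f:
    json.dump(conflist, f, indent=2)
```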
Jan 22 00:34:19.100782 systemd[1]: issuegen.service: Deactivated successfully. Jan 22 00:34:19.101110 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 22 00:34:19.104790 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 22 00:34:19.123503 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 22 00:34:19.126850 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 22 00:34:19.131695 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 22 00:34:19.132671 systemd[1]: Reached target getty.target - Login Prompts. Jan 22 00:34:19.190308 coreos-metadata[1584]: Jan 22 00:34:19.189 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Jan 22 00:34:19.286685 coreos-metadata[1584]: Jan 22 00:34:19.286 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Jan 22 00:34:19.477565 coreos-metadata[1584]: Jan 22 00:34:19.476 INFO Fetch successful Jan 22 00:34:19.477565 coreos-metadata[1584]: Jan 22 00:34:19.476 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Jan 22 00:34:19.744429 coreos-metadata[1584]: Jan 22 00:34:19.744 INFO Fetch successful Jan 22 00:34:19.877499 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 22 00:34:19.879641 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 22 00:34:19.956353 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 22 00:34:19.957750 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 22 00:34:19.960009 systemd[1]: Startup finished in 2.985s (kernel) + 5.939s (initrd) + 5.334s (userspace) = 14.259s. Jan 22 00:34:19.969036 (kubelet)[1764]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 22 00:34:20.451238 kubelet[1764]: E0122 00:34:20.451176 1764 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 22 00:34:20.454826 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 22 00:34:20.455285 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 22 00:34:20.456215 systemd[1]: kubelet.service: Consumed 837ms CPU time, 256.9M memory peak. Jan 22 00:34:21.473463 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 22 00:34:21.475097 systemd[1]: Started sshd@0-172.232.4.171:22-20.161.92.111:60780.service - OpenSSH per-connection server daemon (20.161.92.111:60780). Jan 22 00:34:21.701498 sshd[1776]: Accepted publickey for core from 20.161.92.111 port 60780 ssh2: RSA SHA256:Scpzv+CdshzsDE47WY8VrFlF7F5SiGsSfXbWwUnoTgA Jan 22 00:34:21.703445 sshd-session[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:34:21.710358 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 22 00:34:21.712144 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 22 00:34:21.718056 systemd-logind[1596]: New session 1 of user core. Jan 22 00:34:21.729505 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 22 00:34:21.732758 systemd[1]: Starting user@500.service - User Manager for UID 500... 
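The kubelet exit above ("failed to load Kubelet config file /var/lib/kubelet/config.yaml ... no such file or directory") is likewise the normal first-boot state: that file is generated when the node is bootstrapped (on kubeadm-managed machines by "kubeadm init" / "kubeadm join"), and until then systemd simply records the failed start. For orientation only, a minimal sketch of the kind of KubeletConfiguration that ends up at that path is shown below; everything in it is an assumption except the systemd cgroup driver and the containerd socket, which match the SystemdCgroup=true runc option and the /run/containerd/containerd.sock listener seen earlier in this log.

```yaml
# Hypothetical minimal /var/lib/kubelet/config.yaml, shown only to illustrate what the
# kubelet was looking for; on a kubeadm-managed node this is written during
# "kubeadm init" / "kubeadm join" rather than by hand.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd        # matches SystemdCgroup=true in the CRI runtime config above
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
```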
Jan 22 00:34:21.753179 (systemd)[1781]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 22 00:34:21.755930 systemd-logind[1596]: New session c1 of user core. Jan 22 00:34:21.884380 systemd[1781]: Queued start job for default target default.target. Jan 22 00:34:21.896196 systemd[1781]: Created slice app.slice - User Application Slice. Jan 22 00:34:21.896232 systemd[1781]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 22 00:34:21.896247 systemd[1781]: Reached target paths.target - Paths. Jan 22 00:34:21.896294 systemd[1781]: Reached target timers.target - Timers. Jan 22 00:34:21.897858 systemd[1781]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 22 00:34:21.900688 systemd[1781]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 22 00:34:21.916051 systemd[1781]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 22 00:34:21.917370 systemd[1781]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 22 00:34:21.917507 systemd[1781]: Reached target sockets.target - Sockets. Jan 22 00:34:21.917704 systemd[1781]: Reached target basic.target - Basic System. Jan 22 00:34:21.917905 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 22 00:34:21.919376 systemd[1781]: Reached target default.target - Main User Target. Jan 22 00:34:21.919426 systemd[1781]: Startup finished in 157ms. Jan 22 00:34:21.922701 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 22 00:34:22.025365 systemd[1]: Started sshd@1-172.232.4.171:22-20.161.92.111:60792.service - OpenSSH per-connection server daemon (20.161.92.111:60792). Jan 22 00:34:22.169564 sshd[1794]: Accepted publickey for core from 20.161.92.111 port 60792 ssh2: RSA SHA256:Scpzv+CdshzsDE47WY8VrFlF7F5SiGsSfXbWwUnoTgA Jan 22 00:34:22.170778 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:34:22.177389 systemd-logind[1596]: New session 2 of user core. Jan 22 00:34:22.182670 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 22 00:34:22.239332 sshd[1797]: Connection closed by 20.161.92.111 port 60792 Jan 22 00:34:22.240697 sshd-session[1794]: pam_unix(sshd:session): session closed for user core Jan 22 00:34:22.244344 systemd-logind[1596]: Session 2 logged out. Waiting for processes to exit. Jan 22 00:34:22.244974 systemd[1]: sshd@1-172.232.4.171:22-20.161.92.111:60792.service: Deactivated successfully. Jan 22 00:34:22.246900 systemd[1]: session-2.scope: Deactivated successfully. Jan 22 00:34:22.248622 systemd-logind[1596]: Removed session 2. Jan 22 00:34:22.273043 systemd[1]: Started sshd@2-172.232.4.171:22-20.161.92.111:41470.service - OpenSSH per-connection server daemon (20.161.92.111:41470). Jan 22 00:34:22.426688 sshd[1803]: Accepted publickey for core from 20.161.92.111 port 41470 ssh2: RSA SHA256:Scpzv+CdshzsDE47WY8VrFlF7F5SiGsSfXbWwUnoTgA Jan 22 00:34:22.429994 sshd-session[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:34:22.436020 systemd-logind[1596]: New session 3 of user core. Jan 22 00:34:22.441674 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jan 22 00:34:22.492963 sshd[1806]: Connection closed by 20.161.92.111 port 41470 Jan 22 00:34:22.494849 sshd-session[1803]: pam_unix(sshd:session): session closed for user core Jan 22 00:34:22.499292 systemd[1]: sshd@2-172.232.4.171:22-20.161.92.111:41470.service: Deactivated successfully. Jan 22 00:34:22.501434 systemd[1]: session-3.scope: Deactivated successfully. Jan 22 00:34:22.502697 systemd-logind[1596]: Session 3 logged out. Waiting for processes to exit. Jan 22 00:34:22.503785 systemd-logind[1596]: Removed session 3. Jan 22 00:34:22.524781 systemd[1]: Started sshd@3-172.232.4.171:22-20.161.92.111:41476.service - OpenSSH per-connection server daemon (20.161.92.111:41476). Jan 22 00:34:22.679445 sshd[1812]: Accepted publickey for core from 20.161.92.111 port 41476 ssh2: RSA SHA256:Scpzv+CdshzsDE47WY8VrFlF7F5SiGsSfXbWwUnoTgA Jan 22 00:34:22.681464 sshd-session[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:34:22.687709 systemd-logind[1596]: New session 4 of user core. Jan 22 00:34:22.693703 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 22 00:34:22.750898 sshd[1815]: Connection closed by 20.161.92.111 port 41476 Jan 22 00:34:22.751906 sshd-session[1812]: pam_unix(sshd:session): session closed for user core Jan 22 00:34:22.756209 systemd-logind[1596]: Session 4 logged out. Waiting for processes to exit. Jan 22 00:34:22.756652 systemd[1]: sshd@3-172.232.4.171:22-20.161.92.111:41476.service: Deactivated successfully. Jan 22 00:34:22.758748 systemd[1]: session-4.scope: Deactivated successfully. Jan 22 00:34:22.760864 systemd-logind[1596]: Removed session 4. Jan 22 00:34:22.789032 systemd[1]: Started sshd@4-172.232.4.171:22-20.161.92.111:41486.service - OpenSSH per-connection server daemon (20.161.92.111:41486). Jan 22 00:34:22.948627 sshd[1821]: Accepted publickey for core from 20.161.92.111 port 41486 ssh2: RSA SHA256:Scpzv+CdshzsDE47WY8VrFlF7F5SiGsSfXbWwUnoTgA Jan 22 00:34:22.950403 sshd-session[1821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:34:22.957689 systemd-logind[1596]: New session 5 of user core. Jan 22 00:34:22.966677 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 22 00:34:23.014298 sudo[1825]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 22 00:34:23.014655 sudo[1825]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 22 00:34:23.031714 sudo[1825]: pam_unix(sudo:session): session closed for user root Jan 22 00:34:23.055406 sshd[1824]: Connection closed by 20.161.92.111 port 41486 Jan 22 00:34:23.056786 sshd-session[1821]: pam_unix(sshd:session): session closed for user core Jan 22 00:34:23.061575 systemd[1]: sshd@4-172.232.4.171:22-20.161.92.111:41486.service: Deactivated successfully. Jan 22 00:34:23.063679 systemd[1]: session-5.scope: Deactivated successfully. Jan 22 00:34:23.065125 systemd-logind[1596]: Session 5 logged out. Waiting for processes to exit. Jan 22 00:34:23.067338 systemd-logind[1596]: Removed session 5. Jan 22 00:34:23.086778 systemd[1]: Started sshd@5-172.232.4.171:22-20.161.92.111:41488.service - OpenSSH per-connection server daemon (20.161.92.111:41488). 
Jan 22 00:34:23.239743 sshd[1831]: Accepted publickey for core from 20.161.92.111 port 41488 ssh2: RSA SHA256:Scpzv+CdshzsDE47WY8VrFlF7F5SiGsSfXbWwUnoTgA Jan 22 00:34:23.241231 sshd-session[1831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:34:23.247451 systemd-logind[1596]: New session 6 of user core. Jan 22 00:34:23.253665 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 22 00:34:23.289708 sudo[1836]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 22 00:34:23.290037 sudo[1836]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 22 00:34:23.295883 sudo[1836]: pam_unix(sudo:session): session closed for user root Jan 22 00:34:23.304632 sudo[1835]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 22 00:34:23.304951 sudo[1835]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 22 00:34:23.315271 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 22 00:34:23.358000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 22 00:34:23.359855 kernel: kauditd_printk_skb: 75 callbacks suppressed Jan 22 00:34:23.359895 kernel: audit: type=1305 audit(1769042063.358:229): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 22 00:34:23.358000 audit[1858]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe21503cb0 a2=420 a3=0 items=0 ppid=1839 pid=1858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:23.367238 kernel: audit: type=1300 audit(1769042063.358:229): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe21503cb0 a2=420 a3=0 items=0 ppid=1839 pid=1858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:23.376493 augenrules[1858]: No rules Jan 22 00:34:23.358000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 22 00:34:23.377550 kernel: audit: type=1327 audit(1769042063.358:229): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 22 00:34:23.379424 systemd[1]: audit-rules.service: Deactivated successfully. Jan 22 00:34:23.380055 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 22 00:34:23.382041 kernel: audit: type=1130 audit(1769042063.379:230): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:23.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:23.381493 sudo[1835]: pam_unix(sudo:session): session closed for user root Jan 22 00:34:23.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:34:23.389061 kernel: audit: type=1131 audit(1769042063.379:231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:23.382000 audit[1835]: USER_END pid=1835 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 22 00:34:23.395668 kernel: audit: type=1106 audit(1769042063.382:232): pid=1835 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 22 00:34:23.382000 audit[1835]: CRED_DISP pid=1835 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 22 00:34:23.402557 kernel: audit: type=1104 audit(1769042063.382:233): pid=1835 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 22 00:34:23.408685 sshd[1834]: Connection closed by 20.161.92.111 port 41488 Jan 22 00:34:23.409745 sshd-session[1831]: pam_unix(sshd:session): session closed for user core Jan 22 00:34:23.410000 audit[1831]: USER_END pid=1831 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:34:23.413931 systemd-logind[1596]: Session 6 logged out. Waiting for processes to exit. Jan 22 00:34:23.414991 systemd[1]: sshd@5-172.232.4.171:22-20.161.92.111:41488.service: Deactivated successfully. Jan 22 00:34:23.417852 systemd[1]: session-6.scope: Deactivated successfully. Jan 22 00:34:23.410000 audit[1831]: CRED_DISP pid=1831 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:34:23.421643 kernel: audit: type=1106 audit(1769042063.410:234): pid=1831 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:34:23.421686 kernel: audit: type=1104 audit(1769042063.410:235): pid=1831 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:34:23.421855 systemd-logind[1596]: Removed session 6. Jan 22 00:34:23.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.232.4.171:22-20.161.92.111:41488 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:34:23.428623 kernel: audit: type=1131 audit(1769042063.412:236): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.232.4.171:22-20.161.92.111:41488 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:23.444732 systemd[1]: Started sshd@6-172.232.4.171:22-20.161.92.111:41502.service - OpenSSH per-connection server daemon (20.161.92.111:41502). Jan 22 00:34:23.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.232.4.171:22-20.161.92.111:41502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:23.609000 audit[1867]: USER_ACCT pid=1867 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:34:23.610735 sshd[1867]: Accepted publickey for core from 20.161.92.111 port 41502 ssh2: RSA SHA256:Scpzv+CdshzsDE47WY8VrFlF7F5SiGsSfXbWwUnoTgA Jan 22 00:34:23.611000 audit[1867]: CRED_ACQ pid=1867 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:34:23.611000 audit[1867]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd62c68410 a2=3 a3=0 items=0 ppid=1 pid=1867 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:23.611000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:34:23.612348 sshd-session[1867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:34:23.618821 systemd-logind[1596]: New session 7 of user core. Jan 22 00:34:23.624881 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 22 00:34:23.626000 audit[1867]: USER_START pid=1867 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:34:23.629000 audit[1870]: CRED_ACQ pid=1870 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:34:23.661000 audit[1871]: USER_ACCT pid=1871 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 22 00:34:23.662383 sudo[1871]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 22 00:34:23.662000 audit[1871]: CRED_REFR pid=1871 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 22 00:34:23.662793 sudo[1871]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 22 00:34:23.664000 audit[1871]: USER_START pid=1871 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 22 00:34:24.030362 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 22 00:34:24.051888 (dockerd)[1888]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 22 00:34:24.315399 dockerd[1888]: time="2026-01-22T00:34:24.314616660Z" level=info msg="Starting up" Jan 22 00:34:24.315678 dockerd[1888]: time="2026-01-22T00:34:24.315569230Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 22 00:34:24.328220 dockerd[1888]: time="2026-01-22T00:34:24.328126139Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 22 00:34:24.374345 dockerd[1888]: time="2026-01-22T00:34:24.374314614Z" level=info msg="Loading containers: start." Jan 22 00:34:24.384545 kernel: Initializing XFRM netlink socket Jan 22 00:34:24.446000 audit[1936]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1936 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:24.446000 audit[1936]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd4d4f7a60 a2=0 a3=0 items=0 ppid=1888 pid=1936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.446000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 22 00:34:24.449000 audit[1938]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1938 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:24.449000 audit[1938]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fffb7a85390 a2=0 a3=0 items=0 ppid=1888 pid=1938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.449000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 22 00:34:24.451000 audit[1940]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1940 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:24.451000 audit[1940]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd97336dd0 a2=0 a3=0 items=0 ppid=1888 pid=1940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.451000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 22 00:34:24.454000 audit[1942]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1942 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:24.454000 audit[1942]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 
a1=7fff3505be10 a2=0 a3=0 items=0 ppid=1888 pid=1942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.454000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 22 00:34:24.456000 audit[1944]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1944 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:24.456000 audit[1944]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffa48c5c50 a2=0 a3=0 items=0 ppid=1888 pid=1944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.456000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 22 00:34:24.458000 audit[1946]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1946 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:24.458000 audit[1946]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe65ad5a90 a2=0 a3=0 items=0 ppid=1888 pid=1946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.458000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 22 00:34:24.461000 audit[1948]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1948 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:24.461000 audit[1948]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc6d7ef680 a2=0 a3=0 items=0 ppid=1888 pid=1948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.461000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 22 00:34:24.463000 audit[1950]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1950 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:24.463000 audit[1950]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe19a40f20 a2=0 a3=0 items=0 ppid=1888 pid=1950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.463000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 22 00:34:24.491000 audit[1953]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1953 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:24.491000 audit[1953]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffee09e4860 a2=0 a3=0 items=0 ppid=1888 pid=1953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.491000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 22 00:34:24.493000 audit[1955]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1955 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:24.493000 audit[1955]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd82902c40 a2=0 a3=0 items=0 ppid=1888 pid=1955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.493000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 22 00:34:24.496000 audit[1957]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1957 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:24.496000 audit[1957]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffc952dfff0 a2=0 a3=0 items=0 ppid=1888 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.496000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 22 00:34:24.499000 audit[1959]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1959 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:24.499000 audit[1959]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffce57e8a40 a2=0 a3=0 items=0 ppid=1888 pid=1959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.499000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 22 00:34:24.501000 audit[1961]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1961 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:24.501000 audit[1961]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fffe0f49270 a2=0 a3=0 items=0 ppid=1888 pid=1961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.501000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 22 00:34:24.544000 audit[1991]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1991 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:24.544000 audit[1991]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd275c7590 a2=0 a3=0 items=0 ppid=1888 pid=1991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 22 00:34:24.544000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 22 00:34:24.547000 audit[1993]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1993 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:24.547000 audit[1993]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffdaa39d330 a2=0 a3=0 items=0 ppid=1888 pid=1993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.547000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 22 00:34:24.549000 audit[1995]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1995 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:24.549000 audit[1995]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff9f61eae0 a2=0 a3=0 items=0 ppid=1888 pid=1995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.549000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 22 00:34:24.552000 audit[1997]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1997 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:24.552000 audit[1997]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd611fefb0 a2=0 a3=0 items=0 ppid=1888 pid=1997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.552000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 22 00:34:24.554000 audit[1999]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1999 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:24.554000 audit[1999]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcaf0f5e70 a2=0 a3=0 items=0 ppid=1888 pid=1999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.554000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 22 00:34:24.556000 audit[2001]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2001 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:24.556000 audit[2001]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffef14ea160 a2=0 a3=0 items=0 ppid=1888 pid=2001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.556000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 22 00:34:24.559000 audit[2003]: NETFILTER_CFG table=filter:21 family=10 
entries=1 op=nft_register_chain pid=2003 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:24.559000 audit[2003]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffca4484c90 a2=0 a3=0 items=0 ppid=1888 pid=2003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.559000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 22 00:34:24.559000 audit[2005]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2005 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:24.559000 audit[2005]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe54e4da40 a2=0 a3=0 items=0 ppid=1888 pid=2005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.559000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 22 00:34:24.565000 audit[2007]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2007 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:24.565000 audit[2007]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffc74b16470 a2=0 a3=0 items=0 ppid=1888 pid=2007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.565000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 22 00:34:24.569000 audit[2009]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2009 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:24.569000 audit[2009]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffdc3475380 a2=0 a3=0 items=0 ppid=1888 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.569000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 22 00:34:24.572000 audit[2011]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2011 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:24.572000 audit[2011]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fffeb2d35f0 a2=0 a3=0 items=0 ppid=1888 pid=2011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.572000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 22 00:34:24.574000 audit[2013]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule 
pid=2013 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:24.574000 audit[2013]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffeb580a320 a2=0 a3=0 items=0 ppid=1888 pid=2013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.574000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 22 00:34:24.576000 audit[2015]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2015 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:24.576000 audit[2015]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffc1f865910 a2=0 a3=0 items=0 ppid=1888 pid=2015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.576000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 22 00:34:24.582000 audit[2020]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2020 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:24.582000 audit[2020]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc26553700 a2=0 a3=0 items=0 ppid=1888 pid=2020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.582000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 22 00:34:24.585000 audit[2022]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2022 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:24.585000 audit[2022]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd01941540 a2=0 a3=0 items=0 ppid=1888 pid=2022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.585000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 22 00:34:24.587000 audit[2024]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2024 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:24.587000 audit[2024]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff8be51070 a2=0 a3=0 items=0 ppid=1888 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.587000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 22 00:34:24.590000 audit[2026]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2026 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:24.590000 audit[2026]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffce9778780 a2=0 a3=0 items=0 ppid=1888 pid=2026 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.590000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 22 00:34:24.592000 audit[2028]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2028 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:24.592000 audit[2028]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd4bcc8b70 a2=0 a3=0 items=0 ppid=1888 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.592000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 22 00:34:24.595000 audit[2030]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2030 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:24.595000 audit[2030]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd67c2a000 a2=0 a3=0 items=0 ppid=1888 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.595000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 22 00:34:24.617000 audit[2036]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2036 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:24.617000 audit[2036]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7fff9f9de5a0 a2=0 a3=0 items=0 ppid=1888 pid=2036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.617000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 22 00:34:24.620000 audit[2038]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2038 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:24.620000 audit[2038]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fff1fd95630 a2=0 a3=0 items=0 ppid=1888 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.620000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 22 00:34:24.632000 audit[2046]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2046 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:24.632000 audit[2046]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffc345cd0b0 a2=0 a3=0 items=0 ppid=1888 pid=2046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.632000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 22 00:34:24.643000 audit[2052]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2052 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:24.643000 audit[2052]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff1508c210 a2=0 a3=0 items=0 ppid=1888 pid=2052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.643000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 22 00:34:24.646000 audit[2054]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2054 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:24.646000 audit[2054]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffef9fe5c30 a2=0 a3=0 items=0 ppid=1888 pid=2054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.646000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 22 00:34:24.648000 audit[2056]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2056 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:24.648000 audit[2056]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdfd4d50a0 a2=0 a3=0 items=0 ppid=1888 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.648000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 22 00:34:24.650000 audit[2058]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:24.650000 audit[2058]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffd36548ce0 a2=0 a3=0 items=0 ppid=1888 pid=2058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.650000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 22 00:34:24.652000 audit[2060]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2060 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:24.652000 audit[2060]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff7d2b6070 a2=0 a3=0 items=0 ppid=1888 pid=2060 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.652000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 22 00:34:24.654863 systemd-networkd[1515]: docker0: Link UP Jan 22 00:34:24.657632 dockerd[1888]: time="2026-01-22T00:34:24.657598669Z" level=info msg="Loading containers: done." Jan 22 00:34:24.674997 dockerd[1888]: time="2026-01-22T00:34:24.674951150Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 22 00:34:24.675126 dockerd[1888]: time="2026-01-22T00:34:24.675011213Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 22 00:34:24.675126 dockerd[1888]: time="2026-01-22T00:34:24.675085807Z" level=info msg="Initializing buildkit" Jan 22 00:34:24.698653 dockerd[1888]: time="2026-01-22T00:34:24.698630840Z" level=info msg="Completed buildkit initialization" Jan 22 00:34:24.705533 dockerd[1888]: time="2026-01-22T00:34:24.705426426Z" level=info msg="Daemon has completed initialization" Jan 22 00:34:24.705533 dockerd[1888]: time="2026-01-22T00:34:24.705468267Z" level=info msg="API listen on /run/docker.sock" Jan 22 00:34:24.705675 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 22 00:34:24.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:25.297862 containerd[1629]: time="2026-01-22T00:34:25.297811818Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 22 00:34:25.890478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount17667901.mount: Deactivated successfully. 
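The NETFILTER_CFG/SYSCALL/PROCTITLE triplets above record dockerd driving iptables and ip6tables (via xtables-nft-multi) to create its DOCKER-USER, DOCKER-FORWARD and isolation chains. The proctitle field is the command line with arguments separated by NUL bytes and hex-encoded by the audit subsystem; a minimal Python sketch for decoding it, with the example value copied from the first ip6tables record above:

# Decode an audit PROCTITLE field: argv joined by NUL bytes, hex-encoded.
def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    return " ".join(part.decode() for part in raw.split(b"\x00") if part)

# Value copied from the ip6tables PROCTITLE record above.
print(decode_proctitle(
    "2F7573722F62696E2F6970367461626C6573002D2D77616974"
    "002D740066696C746572002D4E00444F434B45522D55534552"
))
# -> /usr/bin/ip6tables --wait -t filter -N DOCKER-USER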
Jan 22 00:34:26.646620 containerd[1629]: time="2026-01-22T00:34:26.646579002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:34:26.647451 containerd[1629]: time="2026-01-22T00:34:26.647391426Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=25399452" Jan 22 00:34:26.647976 containerd[1629]: time="2026-01-22T00:34:26.647949177Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:34:26.650248 containerd[1629]: time="2026-01-22T00:34:26.650226747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:34:26.651013 containerd[1629]: time="2026-01-22T00:34:26.650990462Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 1.353137058s" Jan 22 00:34:26.651092 containerd[1629]: time="2026-01-22T00:34:26.651077145Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 22 00:34:26.651783 containerd[1629]: time="2026-01-22T00:34:26.651760201Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 22 00:34:27.876642 containerd[1629]: time="2026-01-22T00:34:27.876597619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:34:27.877583 containerd[1629]: time="2026-01-22T00:34:27.877559202Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21154285" Jan 22 00:34:27.878734 containerd[1629]: time="2026-01-22T00:34:27.878450895Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:34:27.880930 containerd[1629]: time="2026-01-22T00:34:27.880864258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:34:27.881888 containerd[1629]: time="2026-01-22T00:34:27.881865189Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 1.2300836s" Jan 22 00:34:27.881972 containerd[1629]: time="2026-01-22T00:34:27.881956804Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 22 00:34:27.882857 
containerd[1629]: time="2026-01-22T00:34:27.882838705Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 22 00:34:28.831623 containerd[1629]: time="2026-01-22T00:34:28.831580281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:34:28.832788 containerd[1629]: time="2026-01-22T00:34:28.832764423Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=0" Jan 22 00:34:28.833174 containerd[1629]: time="2026-01-22T00:34:28.833135426Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:34:28.836271 containerd[1629]: time="2026-01-22T00:34:28.835477605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:34:28.836271 containerd[1629]: time="2026-01-22T00:34:28.836175500Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 953.316059ms" Jan 22 00:34:28.836271 containerd[1629]: time="2026-01-22T00:34:28.836196805Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 22 00:34:28.837056 containerd[1629]: time="2026-01-22T00:34:28.837012918Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 22 00:34:29.813173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3446989429.mount: Deactivated successfully. 
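Each pull record above reports the bytes read from the registry ("bytes read"), the size containerd records for the image, and the wall-clock pull time, so an approximate transfer rate falls straight out of the log. A small sketch using the kube-apiserver figures, values copied from the entries above:

# Figures copied from the kube-apiserver pull above:
# 25,399,452 bytes read from the registry in 1.353137058s.
bytes_read = 25_399_452
seconds = 1.353137058
print(f"~{bytes_read / seconds / 1e6:.1f} MB/s effective pull rate")
# -> ~18.8 MB/s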
Jan 22 00:34:30.040329 containerd[1629]: time="2026-01-22T00:34:30.040277744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:34:30.041319 containerd[1629]: time="2026-01-22T00:34:30.041290284Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=0" Jan 22 00:34:30.041895 containerd[1629]: time="2026-01-22T00:34:30.041852812Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:34:30.043433 containerd[1629]: time="2026-01-22T00:34:30.043412665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:34:30.044310 containerd[1629]: time="2026-01-22T00:34:30.044098708Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.20693109s" Jan 22 00:34:30.044310 containerd[1629]: time="2026-01-22T00:34:30.044127659Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 22 00:34:30.044811 containerd[1629]: time="2026-01-22T00:34:30.044787901Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 22 00:34:30.705652 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 22 00:34:30.709920 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 22 00:34:30.917635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2249508831.mount: Deactivated successfully. Jan 22 00:34:30.945541 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 22 00:34:30.953378 kernel: kauditd_printk_skb: 132 callbacks suppressed Jan 22 00:34:30.953443 kernel: audit: type=1130 audit(1769042070.944:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:30.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:30.960307 (kubelet)[2193]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 22 00:34:31.005532 kubelet[2193]: E0122 00:34:31.005466 2193 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 22 00:34:31.011488 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 22 00:34:31.011689 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
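The kubelet failure above is the expected state on a node where kubeadm has not yet written /var/lib/kubelet/config.yaml: the process exits with status 1 and systemd's restart policy keeps rescheduling the unit (the "Scheduled restart job" message) until the file exists. A minimal illustration of the failing precondition, for context only:

from pathlib import Path

# Illustration of the condition reported above: kubelet will not start until
# kubeadm (or the operator) writes its config file.
config = Path("/var/lib/kubelet/config.yaml")
if not config.is_file():
    print(f"kubelet would exit: open {config}: no such file or directory")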
Jan 22 00:34:31.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 22 00:34:31.012373 systemd[1]: kubelet.service: Consumed 204ms CPU time, 108.8M memory peak. Jan 22 00:34:31.018603 kernel: audit: type=1131 audit(1769042071.011:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 22 00:34:31.542982 containerd[1629]: time="2026-01-22T00:34:31.542932375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:34:31.544536 containerd[1629]: time="2026-01-22T00:34:31.544326089Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=21568511" Jan 22 00:34:31.545002 containerd[1629]: time="2026-01-22T00:34:31.544965491Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:34:31.547913 containerd[1629]: time="2026-01-22T00:34:31.547886594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:34:31.549095 containerd[1629]: time="2026-01-22T00:34:31.549057435Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.504241352s" Jan 22 00:34:31.549132 containerd[1629]: time="2026-01-22T00:34:31.549099194Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 22 00:34:31.549958 containerd[1629]: time="2026-01-22T00:34:31.549940383Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 22 00:34:32.061956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2626687366.mount: Deactivated successfully. 
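The kauditd lines interleave two clocks: journald prints wall-clock time, while the audit records carry epoch seconds such as audit(1769042071.011:288). Converting the epoch value shows both refer to the same instant (assuming the host clock is on UTC):

from datetime import datetime, timezone

# Epoch value copied from the audit(1769042071.011:288) record above.
print(datetime.fromtimestamp(1769042071.011, tz=timezone.utc).isoformat())
# -> 2026-01-22T00:34:31.011000+00:00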
Jan 22 00:34:32.065041 containerd[1629]: time="2026-01-22T00:34:32.064992275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:34:32.065757 containerd[1629]: time="2026-01-22T00:34:32.065735078Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Jan 22 00:34:32.066752 containerd[1629]: time="2026-01-22T00:34:32.066731439Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:34:32.068688 containerd[1629]: time="2026-01-22T00:34:32.068650538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:34:32.069121 containerd[1629]: time="2026-01-22T00:34:32.069095307Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 518.990766ms" Jan 22 00:34:32.069165 containerd[1629]: time="2026-01-22T00:34:32.069122829Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 22 00:34:32.069699 containerd[1629]: time="2026-01-22T00:34:32.069499993Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 22 00:34:32.555738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3324320121.mount: Deactivated successfully. Jan 22 00:34:34.862715 containerd[1629]: time="2026-01-22T00:34:34.862639939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:34:34.864040 containerd[1629]: time="2026-01-22T00:34:34.863865035Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=61186606" Jan 22 00:34:34.864580 containerd[1629]: time="2026-01-22T00:34:34.864549719Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:34:34.867267 containerd[1629]: time="2026-01-22T00:34:34.867235408Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:34:34.868606 containerd[1629]: time="2026-01-22T00:34:34.868293569Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 2.79847467s" Jan 22 00:34:34.868606 containerd[1629]: time="2026-01-22T00:34:34.868323751Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 22 00:34:37.463281 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
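The pulls recorded above cover the image set kubeadm needs on this control-plane node (apiserver, controller-manager, scheduler, proxy, CoreDNS, pause and etcd). Summing the sizes reported in those log entries gives a rough footprint; a sketch with the values copied from the messages above:

# Sizes as reported by the "Pulled image ..." messages above, in bytes.
image_sizes = {
    "kube-apiserver:v1.34.3": 27_064_672,
    "kube-controller-manager:v1.34.3": 22_819_474,
    "kube-scheduler:v1.34.3": 17_382_979,
    "kube-proxy:v1.34.3": 25_964_312,
    "coredns:v1.12.1": 22_384_805,
    "pause:3.10.1": 320_448,
    "etcd:3.6.4-0": 74_311_308,
}
print(f"total reported size: {sum(image_sizes.values()) / 1e6:.0f} MB")
# -> total reported size: 190 MB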
Jan 22 00:34:37.463892 systemd[1]: kubelet.service: Consumed 204ms CPU time, 108.8M memory peak. Jan 22 00:34:37.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:37.467731 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 22 00:34:37.471546 kernel: audit: type=1130 audit(1769042077.461:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:37.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:37.477535 kernel: audit: type=1131 audit(1769042077.461:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:37.507641 systemd[1]: Reload requested from client PID 2325 ('systemctl') (unit session-7.scope)... Jan 22 00:34:37.507656 systemd[1]: Reloading... Jan 22 00:34:37.651752 zram_generator::config[2372]: No configuration found. Jan 22 00:34:37.885119 systemd[1]: Reloading finished in 377 ms. Jan 22 00:34:37.909000 audit: BPF prog-id=67 op=LOAD Jan 22 00:34:37.916076 kernel: audit: type=1334 audit(1769042077.909:291): prog-id=67 op=LOAD Jan 22 00:34:37.909000 audit: BPF prog-id=51 op=UNLOAD Jan 22 00:34:37.910000 audit: BPF prog-id=68 op=LOAD Jan 22 00:34:37.920855 kernel: audit: type=1334 audit(1769042077.909:292): prog-id=51 op=UNLOAD Jan 22 00:34:37.920895 kernel: audit: type=1334 audit(1769042077.910:293): prog-id=68 op=LOAD Jan 22 00:34:37.920952 kernel: audit: type=1334 audit(1769042077.910:294): prog-id=69 op=LOAD Jan 22 00:34:37.910000 audit: BPF prog-id=69 op=LOAD Jan 22 00:34:37.910000 audit: BPF prog-id=52 op=UNLOAD Jan 22 00:34:37.924667 kernel: audit: type=1334 audit(1769042077.910:295): prog-id=52 op=UNLOAD Jan 22 00:34:37.924704 kernel: audit: type=1334 audit(1769042077.910:296): prog-id=53 op=UNLOAD Jan 22 00:34:37.910000 audit: BPF prog-id=53 op=UNLOAD Jan 22 00:34:37.927018 kernel: audit: type=1334 audit(1769042077.911:297): prog-id=70 op=LOAD Jan 22 00:34:37.911000 audit: BPF prog-id=70 op=LOAD Jan 22 00:34:37.911000 audit: BPF prog-id=57 op=UNLOAD Jan 22 00:34:37.929763 kernel: audit: type=1334 audit(1769042077.911:298): prog-id=57 op=UNLOAD Jan 22 00:34:37.913000 audit: BPF prog-id=71 op=LOAD Jan 22 00:34:37.913000 audit: BPF prog-id=63 op=UNLOAD Jan 22 00:34:37.913000 audit: BPF prog-id=72 op=LOAD Jan 22 00:34:37.913000 audit: BPF prog-id=73 op=LOAD Jan 22 00:34:37.913000 audit: BPF prog-id=64 op=UNLOAD Jan 22 00:34:37.913000 audit: BPF prog-id=65 op=UNLOAD Jan 22 00:34:37.914000 audit: BPF prog-id=74 op=LOAD Jan 22 00:34:37.914000 audit: BPF prog-id=48 op=UNLOAD Jan 22 00:34:37.914000 audit: BPF prog-id=75 op=LOAD Jan 22 00:34:37.914000 audit: BPF prog-id=76 op=LOAD Jan 22 00:34:37.914000 audit: BPF prog-id=49 op=UNLOAD Jan 22 00:34:37.914000 audit: BPF prog-id=50 op=UNLOAD Jan 22 00:34:37.933000 audit: BPF prog-id=77 op=LOAD Jan 22 00:34:37.933000 audit: BPF prog-id=54 op=UNLOAD Jan 22 00:34:37.933000 audit: BPF prog-id=78 op=LOAD Jan 22 00:34:37.933000 audit: BPF prog-id=79 op=LOAD Jan 22 
00:34:37.933000 audit: BPF prog-id=55 op=UNLOAD Jan 22 00:34:37.933000 audit: BPF prog-id=56 op=UNLOAD Jan 22 00:34:37.934000 audit: BPF prog-id=80 op=LOAD Jan 22 00:34:37.934000 audit: BPF prog-id=45 op=UNLOAD Jan 22 00:34:37.934000 audit: BPF prog-id=81 op=LOAD Jan 22 00:34:37.934000 audit: BPF prog-id=82 op=LOAD Jan 22 00:34:37.934000 audit: BPF prog-id=46 op=UNLOAD Jan 22 00:34:37.934000 audit: BPF prog-id=47 op=UNLOAD Jan 22 00:34:37.935000 audit: BPF prog-id=83 op=LOAD Jan 22 00:34:37.935000 audit: BPF prog-id=59 op=UNLOAD Jan 22 00:34:37.936000 audit: BPF prog-id=84 op=LOAD Jan 22 00:34:37.936000 audit: BPF prog-id=85 op=LOAD Jan 22 00:34:37.936000 audit: BPF prog-id=43 op=UNLOAD Jan 22 00:34:37.936000 audit: BPF prog-id=44 op=UNLOAD Jan 22 00:34:37.937000 audit: BPF prog-id=86 op=LOAD Jan 22 00:34:37.937000 audit: BPF prog-id=58 op=UNLOAD Jan 22 00:34:37.940000 audit: BPF prog-id=87 op=LOAD Jan 22 00:34:37.940000 audit: BPF prog-id=60 op=UNLOAD Jan 22 00:34:37.940000 audit: BPF prog-id=88 op=LOAD Jan 22 00:34:37.940000 audit: BPF prog-id=89 op=LOAD Jan 22 00:34:37.940000 audit: BPF prog-id=61 op=UNLOAD Jan 22 00:34:37.940000 audit: BPF prog-id=62 op=UNLOAD Jan 22 00:34:37.941000 audit: BPF prog-id=90 op=LOAD Jan 22 00:34:37.941000 audit: BPF prog-id=66 op=UNLOAD Jan 22 00:34:37.959183 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 22 00:34:37.959287 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 22 00:34:37.959638 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 22 00:34:37.959686 systemd[1]: kubelet.service: Consumed 139ms CPU time, 98.3M memory peak. Jan 22 00:34:37.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 22 00:34:37.961371 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 22 00:34:38.131753 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 22 00:34:38.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:38.143758 (kubelet)[2426]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 22 00:34:38.182541 kubelet[2426]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 22 00:34:38.182541 kubelet[2426]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
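The burst of "BPF prog-id=NN op=LOAD/UNLOAD" audit records above lines up with the systemd reload: on daemon-reload systemd re-attaches per-unit cgroup BPF programs (device filters, firewall and accounting hooks), so old program IDs are unloaded as their replacements load. A quick sketch for tallying such records from a journal dump; the excerpt below is a trimmed, hypothetical stand-in:

import re
from collections import Counter

journal_excerpt = """\
audit: BPF prog-id=67 op=LOAD
audit: BPF prog-id=51 op=UNLOAD
audit: BPF prog-id=68 op=LOAD
"""

ops = Counter(re.findall(r"BPF prog-id=\d+ op=(LOAD|UNLOAD)", journal_excerpt))
print(ops)
# -> Counter({'LOAD': 2, 'UNLOAD': 1})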
Jan 22 00:34:38.182541 kubelet[2426]: I0122 00:34:38.182013 2426 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 22 00:34:38.736626 kubelet[2426]: I0122 00:34:38.736582 2426 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 22 00:34:38.736626 kubelet[2426]: I0122 00:34:38.736618 2426 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 22 00:34:38.736756 kubelet[2426]: I0122 00:34:38.736644 2426 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 22 00:34:38.736756 kubelet[2426]: I0122 00:34:38.736655 2426 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 22 00:34:38.736842 kubelet[2426]: I0122 00:34:38.736828 2426 server.go:956] "Client rotation is on, will bootstrap in background" Jan 22 00:34:38.741065 kubelet[2426]: E0122 00:34:38.741029 2426 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.232.4.171:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.232.4.171:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 22 00:34:38.741400 kubelet[2426]: I0122 00:34:38.741283 2426 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 22 00:34:38.745086 kubelet[2426]: I0122 00:34:38.745033 2426 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 22 00:34:38.749396 kubelet[2426]: I0122 00:34:38.749381 2426 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 22 00:34:38.750553 kubelet[2426]: I0122 00:34:38.750314 2426 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 22 00:34:38.750553 kubelet[2426]: I0122 00:34:38.750336 2426 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-232-4-171","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 22 00:34:38.750553 kubelet[2426]: I0122 00:34:38.750454 2426 topology_manager.go:138] "Creating topology manager with none policy" Jan 22 00:34:38.750553 kubelet[2426]: I0122 00:34:38.750462 2426 container_manager_linux.go:306] "Creating device plugin manager" Jan 22 00:34:38.750752 kubelet[2426]: I0122 00:34:38.750572 2426 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 22 00:34:38.752468 kubelet[2426]: I0122 00:34:38.752454 2426 state_mem.go:36] "Initialized new in-memory state store" Jan 22 00:34:38.752667 kubelet[2426]: I0122 00:34:38.752654 2426 kubelet.go:475] "Attempting to sync node with API server" Jan 22 00:34:38.752706 kubelet[2426]: I0122 00:34:38.752674 2426 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 22 00:34:38.753216 kubelet[2426]: E0122 00:34:38.753197 2426 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.232.4.171:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-232-4-171&limit=500&resourceVersion=0\": dial tcp 172.232.4.171:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 22 00:34:38.753251 kubelet[2426]: I0122 00:34:38.753233 2426 kubelet.go:387] "Adding apiserver pod source" Jan 22 00:34:38.753276 kubelet[2426]: I0122 00:34:38.753259 2426 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 22 00:34:38.758201 kubelet[2426]: E0122 00:34:38.758160 2426 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.232.4.171:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial 
tcp 172.232.4.171:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 22 00:34:38.758290 kubelet[2426]: I0122 00:34:38.758276 2426 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 22 00:34:38.760529 kubelet[2426]: I0122 00:34:38.760490 2426 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 22 00:34:38.760576 kubelet[2426]: I0122 00:34:38.760532 2426 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 22 00:34:38.760601 kubelet[2426]: W0122 00:34:38.760575 2426 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 22 00:34:38.764560 kubelet[2426]: I0122 00:34:38.764023 2426 server.go:1262] "Started kubelet" Jan 22 00:34:38.764560 kubelet[2426]: I0122 00:34:38.764311 2426 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 22 00:34:38.771272 kubelet[2426]: I0122 00:34:38.771256 2426 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 22 00:34:38.771546 kubelet[2426]: I0122 00:34:38.771495 2426 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 22 00:34:38.771590 kubelet[2426]: I0122 00:34:38.771559 2426 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 22 00:34:38.771782 kubelet[2426]: I0122 00:34:38.771759 2426 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 22 00:34:38.775112 kubelet[2426]: I0122 00:34:38.775084 2426 server.go:310] "Adding debug handlers to kubelet server" Jan 22 00:34:38.777014 kubelet[2426]: E0122 00:34:38.775869 2426 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.232.4.171:6443/api/v1/namespaces/default/events\": dial tcp 172.232.4.171:6443: connect: connection refused" event="&Event{ObjectMeta:{172-232-4-171.188ce665d9725d60 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-232-4-171,UID:172-232-4-171,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-232-4-171,},FirstTimestamp:2026-01-22 00:34:38.76399856 +0000 UTC m=+0.615949266,LastTimestamp:2026-01-22 00:34:38.76399856 +0000 UTC m=+0.615949266,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-232-4-171,}" Jan 22 00:34:38.777284 kubelet[2426]: I0122 00:34:38.777263 2426 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 22 00:34:38.780027 kubelet[2426]: I0122 00:34:38.780002 2426 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 22 00:34:38.778000 audit[2441]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2441 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:38.780240 kubelet[2426]: E0122 00:34:38.780173 2426 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-232-4-171\" not found" Jan 22 00:34:38.778000 audit[2441]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffeed3c2d30 a2=0 a3=0 items=0 ppid=2426 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:38.778000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 22 00:34:38.780740 kubelet[2426]: I0122 00:34:38.780610 2426 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 22 00:34:38.780740 kubelet[2426]: I0122 00:34:38.780657 2426 reconciler.go:29] "Reconciler: start to sync state" Jan 22 00:34:38.781547 kubelet[2426]: E0122 00:34:38.781245 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.4.171:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-4-171?timeout=10s\": dial tcp 172.232.4.171:6443: connect: connection refused" interval="200ms" Jan 22 00:34:38.781547 kubelet[2426]: E0122 00:34:38.781268 2426 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.232.4.171:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.232.4.171:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 22 00:34:38.782745 kubelet[2426]: I0122 00:34:38.782719 2426 factory.go:223] Registration of the systemd container factory successfully Jan 22 00:34:38.782796 kubelet[2426]: I0122 00:34:38.782786 2426 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 22 00:34:38.780000 audit[2442]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2442 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:38.780000 audit[2442]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffffd156690 a2=0 a3=0 items=0 ppid=2426 pid=2442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:38.780000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 22 00:34:38.783936 kubelet[2426]: I0122 00:34:38.783855 2426 factory.go:223] Registration of the containerd container factory successfully Jan 22 00:34:38.784000 audit[2444]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2444 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:38.784000 audit[2444]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffdf2f5ef00 a2=0 a3=0 items=0 ppid=2426 pid=2444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:38.784000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 22 00:34:38.788000 audit[2446]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2446 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:38.788000 audit[2446]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc120dc950 a2=0 a3=0 
items=0 ppid=2426 pid=2446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:38.788000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 22 00:34:38.796000 audit[2449]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2449 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:38.796000 audit[2449]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffefca4f890 a2=0 a3=0 items=0 ppid=2426 pid=2449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:38.796000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E Jan 22 00:34:38.799032 kubelet[2426]: I0122 00:34:38.798925 2426 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 22 00:34:38.798000 audit[2451]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2451 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:38.798000 audit[2451]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc9ef221a0 a2=0 a3=0 items=0 ppid=2426 pid=2451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:38.798000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 22 00:34:38.800608 kubelet[2426]: I0122 00:34:38.800583 2426 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 22 00:34:38.800608 kubelet[2426]: I0122 00:34:38.800604 2426 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 22 00:34:38.800673 kubelet[2426]: I0122 00:34:38.800624 2426 kubelet.go:2427] "Starting kubelet main sync loop" Jan 22 00:34:38.800698 kubelet[2426]: E0122 00:34:38.800663 2426 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 22 00:34:38.800000 audit[2452]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2452 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:38.800000 audit[2452]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe4c720cf0 a2=0 a3=0 items=0 ppid=2426 pid=2452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:38.800000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 22 00:34:38.801000 audit[2453]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2453 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:38.801000 audit[2453]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc99f9fc60 a2=0 a3=0 items=0 ppid=2426 pid=2453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:38.801000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 22 00:34:38.802000 audit[2454]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2454 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:38.802000 audit[2454]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd591ce470 a2=0 a3=0 items=0 ppid=2426 pid=2454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:38.802000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 22 00:34:38.804000 audit[2455]: NETFILTER_CFG table=mangle:51 family=10 entries=1 op=nft_register_chain pid=2455 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:38.804000 audit[2455]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe8998a820 a2=0 a3=0 items=0 ppid=2426 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:38.804000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 22 00:34:38.805000 audit[2456]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2456 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:38.805000 audit[2456]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd7754040 a2=0 a3=0 items=0 ppid=2426 pid=2456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:38.805000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 22 00:34:38.806000 audit[2457]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2457 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:38.806000 audit[2457]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe4f35bd20 a2=0 a3=0 items=0 ppid=2426 pid=2457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:38.806000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 22 00:34:38.809915 kubelet[2426]: E0122 00:34:38.809885 2426 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.232.4.171:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.232.4.171:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 22 00:34:38.811640 kubelet[2426]: E0122 00:34:38.811564 2426 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 22 00:34:38.816832 kubelet[2426]: I0122 00:34:38.816711 2426 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 22 00:34:38.816832 kubelet[2426]: I0122 00:34:38.816749 2426 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 22 00:34:38.816832 kubelet[2426]: I0122 00:34:38.816765 2426 state_mem.go:36] "Initialized new in-memory state store" Jan 22 00:34:38.817990 kubelet[2426]: I0122 00:34:38.817975 2426 policy_none.go:49] "None policy: Start" Jan 22 00:34:38.818038 kubelet[2426]: I0122 00:34:38.817992 2426 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 22 00:34:38.818038 kubelet[2426]: I0122 00:34:38.818004 2426 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 22 00:34:38.819138 kubelet[2426]: I0122 00:34:38.819112 2426 policy_none.go:47] "Start" Jan 22 00:34:38.824143 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 22 00:34:38.843353 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 22 00:34:38.847497 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 22 00:34:38.859543 kubelet[2426]: E0122 00:34:38.859507 2426 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 22 00:34:38.859755 kubelet[2426]: I0122 00:34:38.859743 2426 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 22 00:34:38.859825 kubelet[2426]: I0122 00:34:38.859802 2426 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 22 00:34:38.860188 kubelet[2426]: I0122 00:34:38.860114 2426 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 22 00:34:38.861942 kubelet[2426]: E0122 00:34:38.861927 2426 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 22 00:34:38.862419 kubelet[2426]: E0122 00:34:38.862016 2426 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-232-4-171\" not found" Jan 22 00:34:38.912018 systemd[1]: Created slice kubepods-burstable-pod0cd11350b9cd6b96c6a160c7a6d1b89b.slice - libcontainer container kubepods-burstable-pod0cd11350b9cd6b96c6a160c7a6d1b89b.slice. Jan 22 00:34:38.929732 kubelet[2426]: E0122 00:34:38.929700 2426 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-4-171\" not found" node="172-232-4-171" Jan 22 00:34:38.933644 systemd[1]: Created slice kubepods-burstable-pod9f95bfd63324691327f670865e035878.slice - libcontainer container kubepods-burstable-pod9f95bfd63324691327f670865e035878.slice. Jan 22 00:34:38.936044 kubelet[2426]: E0122 00:34:38.935876 2426 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-4-171\" not found" node="172-232-4-171" Jan 22 00:34:38.944376 systemd[1]: Created slice kubepods-burstable-pod38481fab5631e772b42181c1dae77b6c.slice - libcontainer container kubepods-burstable-pod38481fab5631e772b42181c1dae77b6c.slice. Jan 22 00:34:38.947653 kubelet[2426]: E0122 00:34:38.947634 2426 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-4-171\" not found" node="172-232-4-171" Jan 22 00:34:38.962042 kubelet[2426]: I0122 00:34:38.962026 2426 kubelet_node_status.go:75] "Attempting to register node" node="172-232-4-171" Jan 22 00:34:38.962435 kubelet[2426]: E0122 00:34:38.962414 2426 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.232.4.171:6443/api/v1/nodes\": dial tcp 172.232.4.171:6443: connect: connection refused" node="172-232-4-171" Jan 22 00:34:38.981941 kubelet[2426]: I0122 00:34:38.981911 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9f95bfd63324691327f670865e035878-ca-certs\") pod \"kube-controller-manager-172-232-4-171\" (UID: \"9f95bfd63324691327f670865e035878\") " pod="kube-system/kube-controller-manager-172-232-4-171" Jan 22 00:34:38.982000 kubelet[2426]: I0122 00:34:38.981953 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9f95bfd63324691327f670865e035878-flexvolume-dir\") pod \"kube-controller-manager-172-232-4-171\" (UID: \"9f95bfd63324691327f670865e035878\") " pod="kube-system/kube-controller-manager-172-232-4-171" Jan 22 00:34:38.982000 kubelet[2426]: I0122 00:34:38.981980 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9f95bfd63324691327f670865e035878-k8s-certs\") pod \"kube-controller-manager-172-232-4-171\" (UID: \"9f95bfd63324691327f670865e035878\") " pod="kube-system/kube-controller-manager-172-232-4-171" Jan 22 00:34:38.982062 kubelet[2426]: I0122 00:34:38.982006 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f95bfd63324691327f670865e035878-kubeconfig\") pod \"kube-controller-manager-172-232-4-171\" (UID: \"9f95bfd63324691327f670865e035878\") " 
pod="kube-system/kube-controller-manager-172-232-4-171" Jan 22 00:34:38.982062 kubelet[2426]: I0122 00:34:38.982032 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9f95bfd63324691327f670865e035878-usr-share-ca-certificates\") pod \"kube-controller-manager-172-232-4-171\" (UID: \"9f95bfd63324691327f670865e035878\") " pod="kube-system/kube-controller-manager-172-232-4-171" Jan 22 00:34:38.982112 kubelet[2426]: I0122 00:34:38.982059 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/38481fab5631e772b42181c1dae77b6c-kubeconfig\") pod \"kube-scheduler-172-232-4-171\" (UID: \"38481fab5631e772b42181c1dae77b6c\") " pod="kube-system/kube-scheduler-172-232-4-171" Jan 22 00:34:38.982112 kubelet[2426]: I0122 00:34:38.982085 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0cd11350b9cd6b96c6a160c7a6d1b89b-ca-certs\") pod \"kube-apiserver-172-232-4-171\" (UID: \"0cd11350b9cd6b96c6a160c7a6d1b89b\") " pod="kube-system/kube-apiserver-172-232-4-171" Jan 22 00:34:38.982167 kubelet[2426]: I0122 00:34:38.982110 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0cd11350b9cd6b96c6a160c7a6d1b89b-k8s-certs\") pod \"kube-apiserver-172-232-4-171\" (UID: \"0cd11350b9cd6b96c6a160c7a6d1b89b\") " pod="kube-system/kube-apiserver-172-232-4-171" Jan 22 00:34:38.982167 kubelet[2426]: I0122 00:34:38.982145 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0cd11350b9cd6b96c6a160c7a6d1b89b-usr-share-ca-certificates\") pod \"kube-apiserver-172-232-4-171\" (UID: \"0cd11350b9cd6b96c6a160c7a6d1b89b\") " pod="kube-system/kube-apiserver-172-232-4-171" Jan 22 00:34:38.982459 kubelet[2426]: E0122 00:34:38.982426 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.4.171:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-4-171?timeout=10s\": dial tcp 172.232.4.171:6443: connect: connection refused" interval="400ms" Jan 22 00:34:39.165314 kubelet[2426]: I0122 00:34:39.165218 2426 kubelet_node_status.go:75] "Attempting to register node" node="172-232-4-171" Jan 22 00:34:39.166163 kubelet[2426]: E0122 00:34:39.166021 2426 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.232.4.171:6443/api/v1/nodes\": dial tcp 172.232.4.171:6443: connect: connection refused" node="172-232-4-171" Jan 22 00:34:39.232200 kubelet[2426]: E0122 00:34:39.232083 2426 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:34:39.232857 containerd[1629]: time="2026-01-22T00:34:39.232824181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-232-4-171,Uid:0cd11350b9cd6b96c6a160c7a6d1b89b,Namespace:kube-system,Attempt:0,}" Jan 22 00:34:39.237618 kubelet[2426]: E0122 00:34:39.237590 2426 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 
172.232.0.21 172.232.0.13" Jan 22 00:34:39.237991 containerd[1629]: time="2026-01-22T00:34:39.237921857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-232-4-171,Uid:9f95bfd63324691327f670865e035878,Namespace:kube-system,Attempt:0,}" Jan 22 00:34:39.249959 kubelet[2426]: E0122 00:34:39.249869 2426 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:34:39.250459 containerd[1629]: time="2026-01-22T00:34:39.250326937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-232-4-171,Uid:38481fab5631e772b42181c1dae77b6c,Namespace:kube-system,Attempt:0,}" Jan 22 00:34:39.383274 kubelet[2426]: E0122 00:34:39.383232 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.4.171:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-4-171?timeout=10s\": dial tcp 172.232.4.171:6443: connect: connection refused" interval="800ms" Jan 22 00:34:39.569078 kubelet[2426]: I0122 00:34:39.568965 2426 kubelet_node_status.go:75] "Attempting to register node" node="172-232-4-171" Jan 22 00:34:39.569537 kubelet[2426]: E0122 00:34:39.569470 2426 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.232.4.171:6443/api/v1/nodes\": dial tcp 172.232.4.171:6443: connect: connection refused" node="172-232-4-171" Jan 22 00:34:39.797723 kubelet[2426]: E0122 00:34:39.797650 2426 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.232.4.171:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-232-4-171&limit=500&resourceVersion=0\": dial tcp 172.232.4.171:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 22 00:34:40.161216 kubelet[2426]: E0122 00:34:40.161174 2426 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.232.4.171:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.232.4.171:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 22 00:34:40.184274 kubelet[2426]: E0122 00:34:40.184211 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.4.171:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-4-171?timeout=10s\": dial tcp 172.232.4.171:6443: connect: connection refused" interval="1.6s" Jan 22 00:34:40.255326 kubelet[2426]: E0122 00:34:40.255222 2426 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.232.4.171:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.232.4.171:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 22 00:34:40.257267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1287038477.mount: Deactivated successfully. 
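The repeated "connection refused" and "Failed to ensure lease exists" errors above, with the retry interval doubling from 200ms through 400ms and 800ms to 1.6s, are the usual kubeadm bootstrap chicken-and-egg: the kubelet cannot reach https://172.232.4.171:6443 because the kube-apiserver static pod whose sandbox it is creating is not serving yet. The same condition can be reproduced with a plain TCP probe (address taken from the log; adjust for another host):

import socket

# Probe the apiserver endpoint the kubelet is retrying in the log above.
try:
    with socket.create_connection(("172.232.4.171", 6443), timeout=2):
        print("apiserver port is accepting connections")
except OSError as err:
    print(f"apiserver not reachable yet: {err}")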
Jan 22 00:34:40.261348 containerd[1629]: time="2026-01-22T00:34:40.261278110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 22 00:34:40.263330 containerd[1629]: time="2026-01-22T00:34:40.263297367Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 22 00:34:40.264044 containerd[1629]: time="2026-01-22T00:34:40.264016660Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 22 00:34:40.264629 containerd[1629]: time="2026-01-22T00:34:40.264491487Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 22 00:34:40.265657 containerd[1629]: time="2026-01-22T00:34:40.265630576Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 22 00:34:40.266207 containerd[1629]: time="2026-01-22T00:34:40.266189382Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 22 00:34:40.266879 containerd[1629]: time="2026-01-22T00:34:40.266843397Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 22 00:34:40.267433 containerd[1629]: time="2026-01-22T00:34:40.267394330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 22 00:34:40.268205 containerd[1629]: time="2026-01-22T00:34:40.267895931Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.033282132s" Jan 22 00:34:40.270584 containerd[1629]: time="2026-01-22T00:34:40.270564225Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.031187503s" Jan 22 00:34:40.275031 kubelet[2426]: E0122 00:34:40.275008 2426 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.232.4.171:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.232.4.171:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 22 00:34:40.277138 containerd[1629]: time="2026-01-22T00:34:40.276375861Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.024773139s" Jan 22 00:34:40.296431 containerd[1629]: time="2026-01-22T00:34:40.296386107Z" level=info msg="connecting to shim 4bae187cab0bf8c3883c95fba31224e4879e6e1c1b3b8d96c32fe837217362d6" address="unix:///run/containerd/s/db8303946e186445336776c5d71ee47f3ae39adf00d3c6efe01b4cbfbb2bd172" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:34:40.301889 containerd[1629]: time="2026-01-22T00:34:40.301864150Z" level=info msg="connecting to shim 15bc8a09c39821a790fd1e38f3fc571ffe3ca369eb25e0f40dbdc1b3a0c81330" address="unix:///run/containerd/s/cae0ea7cfbeb7265887966b6427eada53f5984dcb458132c7b45e37b1cf6aa68" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:34:40.316883 containerd[1629]: time="2026-01-22T00:34:40.316848674Z" level=info msg="connecting to shim 32081ca9487355d3d788f604f33477d02a8c4769acf33397d73d0d3d5419ad5b" address="unix:///run/containerd/s/32a199f62b82b4a77371ffa1bc1158f8e04a959ea92075fa49d0919c367c0cb4" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:34:40.338732 systemd[1]: Started cri-containerd-4bae187cab0bf8c3883c95fba31224e4879e6e1c1b3b8d96c32fe837217362d6.scope - libcontainer container 4bae187cab0bf8c3883c95fba31224e4879e6e1c1b3b8d96c32fe837217362d6. Jan 22 00:34:40.366671 systemd[1]: Started cri-containerd-32081ca9487355d3d788f604f33477d02a8c4769acf33397d73d0d3d5419ad5b.scope - libcontainer container 32081ca9487355d3d788f604f33477d02a8c4769acf33397d73d0d3d5419ad5b. Jan 22 00:34:40.371871 systemd[1]: Started cri-containerd-15bc8a09c39821a790fd1e38f3fc571ffe3ca369eb25e0f40dbdc1b3a0c81330.scope - libcontainer container 15bc8a09c39821a790fd1e38f3fc571ffe3ca369eb25e0f40dbdc1b3a0c81330. Jan 22 00:34:40.375128 kubelet[2426]: I0122 00:34:40.375109 2426 kubelet_node_status.go:75] "Attempting to register node" node="172-232-4-171" Jan 22 00:34:40.373000 audit: BPF prog-id=91 op=LOAD Jan 22 00:34:40.375690 kubelet[2426]: E0122 00:34:40.375671 2426 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.232.4.171:6443/api/v1/nodes\": dial tcp 172.232.4.171:6443: connect: connection refused" node="172-232-4-171" Jan 22 00:34:40.375000 audit: BPF prog-id=92 op=LOAD Jan 22 00:34:40.375000 audit[2497]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2474 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462616531383763616230626638633338383363393566626133313232 Jan 22 00:34:40.375000 audit: BPF prog-id=92 op=UNLOAD Jan 22 00:34:40.375000 audit[2497]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2474 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.375000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462616531383763616230626638633338383363393566626133313232 Jan 22 00:34:40.376000 audit: BPF prog-id=93 op=LOAD Jan 22 00:34:40.376000 audit[2497]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2474 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462616531383763616230626638633338383363393566626133313232 Jan 22 00:34:40.376000 audit: BPF prog-id=94 op=LOAD Jan 22 00:34:40.376000 audit[2497]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2474 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462616531383763616230626638633338383363393566626133313232 Jan 22 00:34:40.376000 audit: BPF prog-id=94 op=UNLOAD Jan 22 00:34:40.376000 audit[2497]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2474 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462616531383763616230626638633338383363393566626133313232 Jan 22 00:34:40.376000 audit: BPF prog-id=93 op=UNLOAD Jan 22 00:34:40.376000 audit[2497]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2474 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462616531383763616230626638633338383363393566626133313232 Jan 22 00:34:40.376000 audit: BPF prog-id=95 op=LOAD Jan 22 00:34:40.376000 audit[2497]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2474 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.376000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462616531383763616230626638633338383363393566626133313232 Jan 22 00:34:40.381000 audit: BPF prog-id=96 op=LOAD Jan 22 00:34:40.382000 audit: BPF prog-id=97 op=LOAD Jan 22 00:34:40.382000 audit[2542]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2512 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.382000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332303831636139343837333535643364373838663630346633333437 Jan 22 00:34:40.382000 audit: BPF prog-id=97 op=UNLOAD Jan 22 00:34:40.382000 audit[2542]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2512 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.382000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332303831636139343837333535643364373838663630346633333437 Jan 22 00:34:40.382000 audit: BPF prog-id=98 op=LOAD Jan 22 00:34:40.382000 audit[2542]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2512 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.382000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332303831636139343837333535643364373838663630346633333437 Jan 22 00:34:40.382000 audit: BPF prog-id=99 op=LOAD Jan 22 00:34:40.382000 audit[2542]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2512 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.382000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332303831636139343837333535643364373838663630346633333437 Jan 22 00:34:40.382000 audit: BPF prog-id=99 op=UNLOAD Jan 22 00:34:40.382000 audit[2542]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2512 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.382000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332303831636139343837333535643364373838663630346633333437 Jan 22 00:34:40.382000 audit: BPF prog-id=98 op=UNLOAD Jan 22 00:34:40.382000 audit[2542]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2512 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.382000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332303831636139343837333535643364373838663630346633333437 Jan 22 00:34:40.382000 audit: BPF prog-id=100 op=LOAD Jan 22 00:34:40.382000 audit[2542]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2512 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.382000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332303831636139343837333535643364373838663630346633333437 Jan 22 00:34:40.414000 audit: BPF prog-id=101 op=LOAD Jan 22 00:34:40.415000 audit: BPF prog-id=102 op=LOAD Jan 22 00:34:40.415000 audit[2526]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2493 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135626338613039633339383231613739306664316533386633666335 Jan 22 00:34:40.415000 audit: BPF prog-id=102 op=UNLOAD Jan 22 00:34:40.415000 audit[2526]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2493 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135626338613039633339383231613739306664316533386633666335 Jan 22 00:34:40.415000 audit: BPF prog-id=103 op=LOAD Jan 22 00:34:40.415000 audit[2526]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2493 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.415000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135626338613039633339383231613739306664316533386633666335 Jan 22 00:34:40.416000 audit: BPF prog-id=104 op=LOAD Jan 22 00:34:40.416000 audit[2526]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=2493 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.416000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135626338613039633339383231613739306664316533386633666335 Jan 22 00:34:40.416000 audit: BPF prog-id=104 op=UNLOAD Jan 22 00:34:40.416000 audit[2526]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2493 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.416000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135626338613039633339383231613739306664316533386633666335 Jan 22 00:34:40.416000 audit: BPF prog-id=103 op=UNLOAD Jan 22 00:34:40.416000 audit[2526]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2493 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.416000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135626338613039633339383231613739306664316533386633666335 Jan 22 00:34:40.416000 audit: BPF prog-id=105 op=LOAD Jan 22 00:34:40.416000 audit[2526]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=2493 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.416000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135626338613039633339383231613739306664316533386633666335 Jan 22 00:34:40.436775 containerd[1629]: time="2026-01-22T00:34:40.436471715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-232-4-171,Uid:9f95bfd63324691327f670865e035878,Namespace:kube-system,Attempt:0,} returns sandbox id \"32081ca9487355d3d788f604f33477d02a8c4769acf33397d73d0d3d5419ad5b\"" Jan 22 00:34:40.439047 kubelet[2426]: E0122 00:34:40.438879 2426 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 
172.232.0.21 172.232.0.13" Jan 22 00:34:40.447334 containerd[1629]: time="2026-01-22T00:34:40.447309574Z" level=info msg="CreateContainer within sandbox \"32081ca9487355d3d788f604f33477d02a8c4769acf33397d73d0d3d5419ad5b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 22 00:34:40.450091 containerd[1629]: time="2026-01-22T00:34:40.449957174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-232-4-171,Uid:0cd11350b9cd6b96c6a160c7a6d1b89b,Namespace:kube-system,Attempt:0,} returns sandbox id \"4bae187cab0bf8c3883c95fba31224e4879e6e1c1b3b8d96c32fe837217362d6\"" Jan 22 00:34:40.450712 kubelet[2426]: E0122 00:34:40.450698 2426 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:34:40.456112 containerd[1629]: time="2026-01-22T00:34:40.455609054Z" level=info msg="CreateContainer within sandbox \"4bae187cab0bf8c3883c95fba31224e4879e6e1c1b3b8d96c32fe837217362d6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 22 00:34:40.457188 containerd[1629]: time="2026-01-22T00:34:40.457170023Z" level=info msg="Container 615465ed2109707c632d67323930e012f3e7b49f6e9aee8010a468c35810f33d: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:34:40.471360 containerd[1629]: time="2026-01-22T00:34:40.470748034Z" level=info msg="Container 7790686a17c3861e72b59f53a2a729c277789ded4df059fc2f227d07c2d2dee8: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:34:40.471805 containerd[1629]: time="2026-01-22T00:34:40.471326474Z" level=info msg="CreateContainer within sandbox \"32081ca9487355d3d788f604f33477d02a8c4769acf33397d73d0d3d5419ad5b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"615465ed2109707c632d67323930e012f3e7b49f6e9aee8010a468c35810f33d\"" Jan 22 00:34:40.473591 containerd[1629]: time="2026-01-22T00:34:40.472699329Z" level=info msg="StartContainer for \"615465ed2109707c632d67323930e012f3e7b49f6e9aee8010a468c35810f33d\"" Jan 22 00:34:40.473591 containerd[1629]: time="2026-01-22T00:34:40.473543229Z" level=info msg="connecting to shim 615465ed2109707c632d67323930e012f3e7b49f6e9aee8010a468c35810f33d" address="unix:///run/containerd/s/32a199f62b82b4a77371ffa1bc1158f8e04a959ea92075fa49d0919c367c0cb4" protocol=ttrpc version=3 Jan 22 00:34:40.478966 containerd[1629]: time="2026-01-22T00:34:40.478946166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-232-4-171,Uid:38481fab5631e772b42181c1dae77b6c,Namespace:kube-system,Attempt:0,} returns sandbox id \"15bc8a09c39821a790fd1e38f3fc571ffe3ca369eb25e0f40dbdc1b3a0c81330\"" Jan 22 00:34:40.479192 containerd[1629]: time="2026-01-22T00:34:40.479173013Z" level=info msg="CreateContainer within sandbox \"4bae187cab0bf8c3883c95fba31224e4879e6e1c1b3b8d96c32fe837217362d6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7790686a17c3861e72b59f53a2a729c277789ded4df059fc2f227d07c2d2dee8\"" Jan 22 00:34:40.479740 kubelet[2426]: E0122 00:34:40.479722 2426 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:34:40.480352 containerd[1629]: time="2026-01-22T00:34:40.480302655Z" level=info msg="StartContainer for \"7790686a17c3861e72b59f53a2a729c277789ded4df059fc2f227d07c2d2dee8\"" Jan 22 00:34:40.482471 containerd[1629]: 
time="2026-01-22T00:34:40.482394984Z" level=info msg="connecting to shim 7790686a17c3861e72b59f53a2a729c277789ded4df059fc2f227d07c2d2dee8" address="unix:///run/containerd/s/db8303946e186445336776c5d71ee47f3ae39adf00d3c6efe01b4cbfbb2bd172" protocol=ttrpc version=3 Jan 22 00:34:40.482840 containerd[1629]: time="2026-01-22T00:34:40.482820990Z" level=info msg="CreateContainer within sandbox \"15bc8a09c39821a790fd1e38f3fc571ffe3ca369eb25e0f40dbdc1b3a0c81330\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 22 00:34:40.493460 containerd[1629]: time="2026-01-22T00:34:40.493024578Z" level=info msg="Container 6aa554d67652a2777d74d8c036a0a76849e8229e877c2c0622cf65cfc8dc9b27: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:34:40.500675 systemd[1]: Started cri-containerd-615465ed2109707c632d67323930e012f3e7b49f6e9aee8010a468c35810f33d.scope - libcontainer container 615465ed2109707c632d67323930e012f3e7b49f6e9aee8010a468c35810f33d. Jan 22 00:34:40.501395 containerd[1629]: time="2026-01-22T00:34:40.501375173Z" level=info msg="CreateContainer within sandbox \"15bc8a09c39821a790fd1e38f3fc571ffe3ca369eb25e0f40dbdc1b3a0c81330\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6aa554d67652a2777d74d8c036a0a76849e8229e877c2c0622cf65cfc8dc9b27\"" Jan 22 00:34:40.502580 containerd[1629]: time="2026-01-22T00:34:40.502547577Z" level=info msg="StartContainer for \"6aa554d67652a2777d74d8c036a0a76849e8229e877c2c0622cf65cfc8dc9b27\"" Jan 22 00:34:40.503885 containerd[1629]: time="2026-01-22T00:34:40.503832598Z" level=info msg="connecting to shim 6aa554d67652a2777d74d8c036a0a76849e8229e877c2c0622cf65cfc8dc9b27" address="unix:///run/containerd/s/cae0ea7cfbeb7265887966b6427eada53f5984dcb458132c7b45e37b1cf6aa68" protocol=ttrpc version=3 Jan 22 00:34:40.519795 systemd[1]: Started cri-containerd-7790686a17c3861e72b59f53a2a729c277789ded4df059fc2f227d07c2d2dee8.scope - libcontainer container 7790686a17c3861e72b59f53a2a729c277789ded4df059fc2f227d07c2d2dee8. Jan 22 00:34:40.533847 systemd[1]: Started cri-containerd-6aa554d67652a2777d74d8c036a0a76849e8229e877c2c0622cf65cfc8dc9b27.scope - libcontainer container 6aa554d67652a2777d74d8c036a0a76849e8229e877c2c0622cf65cfc8dc9b27. 
Jan 22 00:34:40.536000 audit: BPF prog-id=106 op=LOAD Jan 22 00:34:40.536000 audit: BPF prog-id=107 op=LOAD Jan 22 00:34:40.536000 audit[2603]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2512 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.536000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631353436356564323130393730376336333264363733323339333065 Jan 22 00:34:40.536000 audit: BPF prog-id=107 op=UNLOAD Jan 22 00:34:40.536000 audit[2603]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2512 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.536000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631353436356564323130393730376336333264363733323339333065 Jan 22 00:34:40.537000 audit: BPF prog-id=108 op=LOAD Jan 22 00:34:40.537000 audit[2603]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2512 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631353436356564323130393730376336333264363733323339333065 Jan 22 00:34:40.537000 audit: BPF prog-id=109 op=LOAD Jan 22 00:34:40.537000 audit[2603]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=2512 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631353436356564323130393730376336333264363733323339333065 Jan 22 00:34:40.537000 audit: BPF prog-id=109 op=UNLOAD Jan 22 00:34:40.537000 audit[2603]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2512 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631353436356564323130393730376336333264363733323339333065 Jan 22 00:34:40.537000 audit: BPF prog-id=108 op=UNLOAD Jan 22 00:34:40.537000 audit[2603]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2512 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631353436356564323130393730376336333264363733323339333065 Jan 22 00:34:40.537000 audit: BPF prog-id=110 op=LOAD Jan 22 00:34:40.537000 audit[2603]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=2512 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631353436356564323130393730376336333264363733323339333065 Jan 22 00:34:40.543000 audit: BPF prog-id=111 op=LOAD Jan 22 00:34:40.544000 audit: BPF prog-id=112 op=LOAD Jan 22 00:34:40.544000 audit[2609]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2474 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.544000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737393036383661313763333836316537326235396635336132613732 Jan 22 00:34:40.544000 audit: BPF prog-id=112 op=UNLOAD Jan 22 00:34:40.544000 audit[2609]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2474 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.544000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737393036383661313763333836316537326235396635336132613732 Jan 22 00:34:40.544000 audit: BPF prog-id=113 op=LOAD Jan 22 00:34:40.544000 audit[2609]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2474 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.544000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737393036383661313763333836316537326235396635336132613732 Jan 22 00:34:40.544000 audit: BPF prog-id=114 op=LOAD Jan 22 00:34:40.544000 audit[2609]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2474 pid=2609 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.544000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737393036383661313763333836316537326235396635336132613732 Jan 22 00:34:40.545000 audit: BPF prog-id=114 op=UNLOAD Jan 22 00:34:40.545000 audit[2609]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2474 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.545000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737393036383661313763333836316537326235396635336132613732 Jan 22 00:34:40.545000 audit: BPF prog-id=113 op=UNLOAD Jan 22 00:34:40.545000 audit[2609]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2474 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.545000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737393036383661313763333836316537326235396635336132613732 Jan 22 00:34:40.545000 audit: BPF prog-id=115 op=LOAD Jan 22 00:34:40.545000 audit[2609]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2474 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.545000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737393036383661313763333836316537326235396635336132613732 Jan 22 00:34:40.565000 audit: BPF prog-id=116 op=LOAD Jan 22 00:34:40.566000 audit: BPF prog-id=117 op=LOAD Jan 22 00:34:40.566000 audit[2626]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c238 a2=98 a3=0 items=0 ppid=2493 pid=2626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661613535346436373635326132373737643734643863303336613061 Jan 22 00:34:40.566000 audit: BPF prog-id=117 op=UNLOAD Jan 22 00:34:40.566000 audit[2626]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2493 pid=2626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661613535346436373635326132373737643734643863303336613061 Jan 22 00:34:40.567000 audit: BPF prog-id=118 op=LOAD Jan 22 00:34:40.567000 audit[2626]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c488 a2=98 a3=0 items=0 ppid=2493 pid=2626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661613535346436373635326132373737643734643863303336613061 Jan 22 00:34:40.567000 audit: BPF prog-id=119 op=LOAD Jan 22 00:34:40.567000 audit[2626]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00018c218 a2=98 a3=0 items=0 ppid=2493 pid=2626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661613535346436373635326132373737643734643863303336613061 Jan 22 00:34:40.567000 audit: BPF prog-id=119 op=UNLOAD Jan 22 00:34:40.567000 audit[2626]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2493 pid=2626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661613535346436373635326132373737643734643863303336613061 Jan 22 00:34:40.567000 audit: BPF prog-id=118 op=UNLOAD Jan 22 00:34:40.567000 audit[2626]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2493 pid=2626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661613535346436373635326132373737643734643863303336613061 Jan 22 00:34:40.567000 audit: BPF prog-id=120 op=LOAD Jan 22 00:34:40.567000 audit[2626]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c6e8 a2=98 a3=0 items=0 ppid=2493 pid=2626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.567000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661613535346436373635326132373737643734643863303336613061 Jan 22 00:34:40.597697 containerd[1629]: time="2026-01-22T00:34:40.597669356Z" level=info msg="StartContainer for \"615465ed2109707c632d67323930e012f3e7b49f6e9aee8010a468c35810f33d\" returns successfully" Jan 22 00:34:40.634204 containerd[1629]: time="2026-01-22T00:34:40.634124047Z" level=info msg="StartContainer for \"7790686a17c3861e72b59f53a2a729c277789ded4df059fc2f227d07c2d2dee8\" returns successfully" Jan 22 00:34:40.648902 containerd[1629]: time="2026-01-22T00:34:40.648873060Z" level=info msg="StartContainer for \"6aa554d67652a2777d74d8c036a0a76849e8229e877c2c0622cf65cfc8dc9b27\" returns successfully" Jan 22 00:34:40.825091 kubelet[2426]: E0122 00:34:40.824272 2426 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-4-171\" not found" node="172-232-4-171" Jan 22 00:34:40.825091 kubelet[2426]: E0122 00:34:40.824548 2426 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:34:40.827636 kubelet[2426]: E0122 00:34:40.827619 2426 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-4-171\" not found" node="172-232-4-171" Jan 22 00:34:40.828208 kubelet[2426]: E0122 00:34:40.828194 2426 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:34:40.830528 kubelet[2426]: E0122 00:34:40.828778 2426 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-4-171\" not found" node="172-232-4-171" Jan 22 00:34:40.830904 kubelet[2426]: E0122 00:34:40.830892 2426 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:34:41.832823 kubelet[2426]: E0122 00:34:41.832783 2426 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-4-171\" not found" node="172-232-4-171" Jan 22 00:34:41.833231 kubelet[2426]: E0122 00:34:41.832907 2426 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:34:41.833231 kubelet[2426]: E0122 00:34:41.833111 2426 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-4-171\" not found" node="172-232-4-171" Jan 22 00:34:41.833231 kubelet[2426]: E0122 00:34:41.833192 2426 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:34:41.833388 kubelet[2426]: E0122 00:34:41.833364 2426 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-4-171\" not found" node="172-232-4-171" Jan 22 00:34:41.833528 kubelet[2426]: E0122 00:34:41.833489 2426 
dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:34:41.979526 kubelet[2426]: I0122 00:34:41.979479 2426 kubelet_node_status.go:75] "Attempting to register node" node="172-232-4-171" Jan 22 00:34:42.831047 kubelet[2426]: E0122 00:34:42.830991 2426 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-232-4-171\" not found" node="172-232-4-171" Jan 22 00:34:42.872593 kubelet[2426]: I0122 00:34:42.872547 2426 kubelet_node_status.go:78] "Successfully registered node" node="172-232-4-171" Jan 22 00:34:42.880834 kubelet[2426]: I0122 00:34:42.880753 2426 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-232-4-171" Jan 22 00:34:42.947938 kubelet[2426]: E0122 00:34:42.946419 2426 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-232-4-171\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-232-4-171" Jan 22 00:34:42.947938 kubelet[2426]: I0122 00:34:42.947728 2426 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-232-4-171" Jan 22 00:34:42.950032 kubelet[2426]: E0122 00:34:42.950001 2426 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-232-4-171\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-232-4-171" Jan 22 00:34:42.950122 kubelet[2426]: I0122 00:34:42.950110 2426 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-232-4-171" Jan 22 00:34:42.952502 kubelet[2426]: E0122 00:34:42.952485 2426 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-232-4-171\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-232-4-171" Jan 22 00:34:43.200787 kubelet[2426]: I0122 00:34:43.200687 2426 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-232-4-171" Jan 22 00:34:43.202471 kubelet[2426]: E0122 00:34:43.202452 2426 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-232-4-171\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-232-4-171" Jan 22 00:34:43.202628 kubelet[2426]: E0122 00:34:43.202610 2426 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:34:43.761429 kubelet[2426]: I0122 00:34:43.761390 2426 apiserver.go:52] "Watching apiserver" Jan 22 00:34:43.781036 kubelet[2426]: I0122 00:34:43.781007 2426 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 22 00:34:44.592819 systemd[1]: Reload requested from client PID 2707 ('systemctl') (unit session-7.scope)... Jan 22 00:34:44.592841 systemd[1]: Reloading... Jan 22 00:34:44.724809 zram_generator::config[2757]: No configuration found. Jan 22 00:34:44.976723 systemd[1]: Reloading finished in 383 ms. Jan 22 00:34:45.014288 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 22 00:34:45.036346 systemd[1]: kubelet.service: Deactivated successfully. 
Jan 22 00:34:45.036820 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 22 00:34:45.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:45.037698 systemd[1]: kubelet.service: Consumed 1.009s CPU time, 125.5M memory peak. Jan 22 00:34:45.039574 kernel: kauditd_printk_skb: 210 callbacks suppressed Jan 22 00:34:45.039642 kernel: audit: type=1131 audit(1769042085.036:401): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:45.045278 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 22 00:34:45.047000 audit: BPF prog-id=121 op=LOAD Jan 22 00:34:45.047000 audit: BPF prog-id=67 op=UNLOAD Jan 22 00:34:45.047000 audit: BPF prog-id=122 op=LOAD Jan 22 00:34:45.047000 audit: BPF prog-id=123 op=LOAD Jan 22 00:34:45.047000 audit: BPF prog-id=68 op=UNLOAD Jan 22 00:34:45.047000 audit: BPF prog-id=69 op=UNLOAD Jan 22 00:34:45.051543 kernel: audit: type=1334 audit(1769042085.047:402): prog-id=121 op=LOAD Jan 22 00:34:45.051629 kernel: audit: type=1334 audit(1769042085.047:403): prog-id=67 op=UNLOAD Jan 22 00:34:45.051653 kernel: audit: type=1334 audit(1769042085.047:404): prog-id=122 op=LOAD Jan 22 00:34:45.051676 kernel: audit: type=1334 audit(1769042085.047:405): prog-id=123 op=LOAD Jan 22 00:34:45.051695 kernel: audit: type=1334 audit(1769042085.047:406): prog-id=68 op=UNLOAD Jan 22 00:34:45.051714 kernel: audit: type=1334 audit(1769042085.047:407): prog-id=69 op=UNLOAD Jan 22 00:34:45.051000 audit: BPF prog-id=124 op=LOAD Jan 22 00:34:45.051000 audit: BPF prog-id=90 op=UNLOAD Jan 22 00:34:45.069304 kernel: audit: type=1334 audit(1769042085.051:408): prog-id=124 op=LOAD Jan 22 00:34:45.069366 kernel: audit: type=1334 audit(1769042085.051:409): prog-id=90 op=UNLOAD Jan 22 00:34:45.069390 kernel: audit: type=1334 audit(1769042085.054:410): prog-id=125 op=LOAD Jan 22 00:34:45.054000 audit: BPF prog-id=125 op=LOAD Jan 22 00:34:45.074000 audit: BPF prog-id=74 op=UNLOAD Jan 22 00:34:45.074000 audit: BPF prog-id=126 op=LOAD Jan 22 00:34:45.074000 audit: BPF prog-id=127 op=LOAD Jan 22 00:34:45.074000 audit: BPF prog-id=75 op=UNLOAD Jan 22 00:34:45.074000 audit: BPF prog-id=76 op=UNLOAD Jan 22 00:34:45.076000 audit: BPF prog-id=128 op=LOAD Jan 22 00:34:45.076000 audit: BPF prog-id=71 op=UNLOAD Jan 22 00:34:45.076000 audit: BPF prog-id=129 op=LOAD Jan 22 00:34:45.076000 audit: BPF prog-id=130 op=LOAD Jan 22 00:34:45.076000 audit: BPF prog-id=72 op=UNLOAD Jan 22 00:34:45.076000 audit: BPF prog-id=73 op=UNLOAD Jan 22 00:34:45.077000 audit: BPF prog-id=131 op=LOAD Jan 22 00:34:45.077000 audit: BPF prog-id=132 op=LOAD Jan 22 00:34:45.077000 audit: BPF prog-id=84 op=UNLOAD Jan 22 00:34:45.077000 audit: BPF prog-id=85 op=UNLOAD Jan 22 00:34:45.078000 audit: BPF prog-id=133 op=LOAD Jan 22 00:34:45.078000 audit: BPF prog-id=86 op=UNLOAD Jan 22 00:34:45.079000 audit: BPF prog-id=134 op=LOAD Jan 22 00:34:45.079000 audit: BPF prog-id=80 op=UNLOAD Jan 22 00:34:45.079000 audit: BPF prog-id=135 op=LOAD Jan 22 00:34:45.079000 audit: BPF prog-id=136 op=LOAD Jan 22 00:34:45.079000 audit: BPF prog-id=81 op=UNLOAD Jan 22 00:34:45.079000 audit: BPF prog-id=82 op=UNLOAD Jan 22 00:34:45.080000 audit: BPF prog-id=137 op=LOAD Jan 22 00:34:45.080000 audit: BPF 
prog-id=83 op=UNLOAD Jan 22 00:34:45.081000 audit: BPF prog-id=138 op=LOAD Jan 22 00:34:45.081000 audit: BPF prog-id=77 op=UNLOAD Jan 22 00:34:45.081000 audit: BPF prog-id=139 op=LOAD Jan 22 00:34:45.081000 audit: BPF prog-id=140 op=LOAD Jan 22 00:34:45.081000 audit: BPF prog-id=78 op=UNLOAD Jan 22 00:34:45.081000 audit: BPF prog-id=79 op=UNLOAD Jan 22 00:34:45.083000 audit: BPF prog-id=141 op=LOAD Jan 22 00:34:45.083000 audit: BPF prog-id=87 op=UNLOAD Jan 22 00:34:45.083000 audit: BPF prog-id=142 op=LOAD Jan 22 00:34:45.083000 audit: BPF prog-id=143 op=LOAD Jan 22 00:34:45.083000 audit: BPF prog-id=88 op=UNLOAD Jan 22 00:34:45.083000 audit: BPF prog-id=89 op=UNLOAD Jan 22 00:34:45.084000 audit: BPF prog-id=144 op=LOAD Jan 22 00:34:45.084000 audit: BPF prog-id=70 op=UNLOAD Jan 22 00:34:45.260895 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 22 00:34:45.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:45.271932 (kubelet)[2805]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 22 00:34:45.328594 kubelet[2805]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 22 00:34:45.328594 kubelet[2805]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 22 00:34:45.328594 kubelet[2805]: I0122 00:34:45.327927 2805 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 22 00:34:45.335219 kubelet[2805]: I0122 00:34:45.335160 2805 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 22 00:34:45.335219 kubelet[2805]: I0122 00:34:45.335183 2805 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 22 00:34:45.335219 kubelet[2805]: I0122 00:34:45.335209 2805 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 22 00:34:45.335219 kubelet[2805]: I0122 00:34:45.335221 2805 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 22 00:34:45.335447 kubelet[2805]: I0122 00:34:45.335406 2805 server.go:956] "Client rotation is on, will bootstrap in background" Jan 22 00:34:45.336690 kubelet[2805]: I0122 00:34:45.336669 2805 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 22 00:34:45.342166 kubelet[2805]: I0122 00:34:45.341546 2805 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 22 00:34:45.346498 kubelet[2805]: I0122 00:34:45.346468 2805 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 22 00:34:45.350845 kubelet[2805]: I0122 00:34:45.350820 2805 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 22 00:34:45.351106 kubelet[2805]: I0122 00:34:45.351066 2805 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 22 00:34:45.351290 kubelet[2805]: I0122 00:34:45.351094 2805 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-232-4-171","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 22 00:34:45.351290 kubelet[2805]: I0122 00:34:45.351263 2805 topology_manager.go:138] "Creating topology manager with none policy" Jan 22 00:34:45.351290 kubelet[2805]: I0122 00:34:45.351273 2805 container_manager_linux.go:306] "Creating device plugin manager" Jan 22 00:34:45.351290 kubelet[2805]: I0122 00:34:45.351295 2805 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 22 00:34:45.352055 kubelet[2805]: I0122 00:34:45.352034 2805 state_mem.go:36] "Initialized new in-memory state store" Jan 22 00:34:45.352249 kubelet[2805]: I0122 00:34:45.352228 2805 kubelet.go:475] "Attempting to sync node with API server" Jan 22 00:34:45.352249 kubelet[2805]: I0122 00:34:45.352246 2805 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 22 00:34:45.352320 kubelet[2805]: I0122 00:34:45.352289 2805 kubelet.go:387] "Adding apiserver pod source" Jan 22 00:34:45.352320 kubelet[2805]: I0122 00:34:45.352312 2805 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 22 00:34:45.359900 kubelet[2805]: I0122 00:34:45.358810 2805 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 22 00:34:45.361009 kubelet[2805]: I0122 00:34:45.360696 2805 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 22 00:34:45.361009 kubelet[2805]: I0122 00:34:45.360727 2805 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 22 00:34:45.366809 
kubelet[2805]: I0122 00:34:45.366749 2805 server.go:1262] "Started kubelet" Jan 22 00:34:45.367765 kubelet[2805]: I0122 00:34:45.367167 2805 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 22 00:34:45.368466 kubelet[2805]: I0122 00:34:45.368397 2805 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 22 00:34:45.368705 kubelet[2805]: I0122 00:34:45.368686 2805 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 22 00:34:45.369041 kubelet[2805]: I0122 00:34:45.369004 2805 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 22 00:34:45.370558 kubelet[2805]: I0122 00:34:45.370063 2805 server.go:310] "Adding debug handlers to kubelet server" Jan 22 00:34:45.375774 kubelet[2805]: I0122 00:34:45.375132 2805 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 22 00:34:45.377798 kubelet[2805]: I0122 00:34:45.377766 2805 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 22 00:34:45.378942 kubelet[2805]: I0122 00:34:45.378902 2805 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 22 00:34:45.380990 kubelet[2805]: I0122 00:34:45.380954 2805 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 22 00:34:45.383436 kubelet[2805]: I0122 00:34:45.383395 2805 reconciler.go:29] "Reconciler: start to sync state" Jan 22 00:34:45.389190 kubelet[2805]: I0122 00:34:45.388318 2805 factory.go:223] Registration of the systemd container factory successfully Jan 22 00:34:45.389190 kubelet[2805]: I0122 00:34:45.388664 2805 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 22 00:34:45.392382 kubelet[2805]: I0122 00:34:45.392216 2805 factory.go:223] Registration of the containerd container factory successfully Jan 22 00:34:45.395637 kubelet[2805]: E0122 00:34:45.395614 2805 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 22 00:34:45.418015 kubelet[2805]: I0122 00:34:45.417948 2805 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 22 00:34:45.420442 kubelet[2805]: I0122 00:34:45.420411 2805 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 22 00:34:45.420442 kubelet[2805]: I0122 00:34:45.420434 2805 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 22 00:34:45.420442 kubelet[2805]: I0122 00:34:45.420456 2805 kubelet.go:2427] "Starting kubelet main sync loop" Jan 22 00:34:45.420882 kubelet[2805]: E0122 00:34:45.420498 2805 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 22 00:34:45.470727 kubelet[2805]: I0122 00:34:45.469380 2805 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 22 00:34:45.470727 kubelet[2805]: I0122 00:34:45.469402 2805 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 22 00:34:45.470727 kubelet[2805]: I0122 00:34:45.469429 2805 state_mem.go:36] "Initialized new in-memory state store" Jan 22 00:34:45.470727 kubelet[2805]: I0122 00:34:45.469594 2805 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 22 00:34:45.470727 kubelet[2805]: I0122 00:34:45.469605 2805 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 22 00:34:45.470727 kubelet[2805]: I0122 00:34:45.469623 2805 policy_none.go:49] "None policy: Start" Jan 22 00:34:45.470727 kubelet[2805]: I0122 00:34:45.469633 2805 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 22 00:34:45.470727 kubelet[2805]: I0122 00:34:45.469644 2805 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 22 00:34:45.470727 kubelet[2805]: I0122 00:34:45.469755 2805 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 22 00:34:45.470727 kubelet[2805]: I0122 00:34:45.469764 2805 policy_none.go:47] "Start" Jan 22 00:34:45.477919 kubelet[2805]: E0122 00:34:45.477900 2805 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 22 00:34:45.479834 kubelet[2805]: I0122 00:34:45.479818 2805 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 22 00:34:45.479983 kubelet[2805]: I0122 00:34:45.479944 2805 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 22 00:34:45.484731 kubelet[2805]: I0122 00:34:45.484711 2805 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 22 00:34:45.487908 kubelet[2805]: E0122 00:34:45.487885 2805 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 22 00:34:45.522897 kubelet[2805]: I0122 00:34:45.521991 2805 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-232-4-171" Jan 22 00:34:45.525740 kubelet[2805]: I0122 00:34:45.524315 2805 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-232-4-171" Jan 22 00:34:45.526157 kubelet[2805]: I0122 00:34:45.524473 2805 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-232-4-171" Jan 22 00:34:45.587173 kubelet[2805]: I0122 00:34:45.586963 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0cd11350b9cd6b96c6a160c7a6d1b89b-k8s-certs\") pod \"kube-apiserver-172-232-4-171\" (UID: \"0cd11350b9cd6b96c6a160c7a6d1b89b\") " pod="kube-system/kube-apiserver-172-232-4-171" Jan 22 00:34:45.587173 kubelet[2805]: I0122 00:34:45.587170 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0cd11350b9cd6b96c6a160c7a6d1b89b-usr-share-ca-certificates\") pod \"kube-apiserver-172-232-4-171\" (UID: \"0cd11350b9cd6b96c6a160c7a6d1b89b\") " pod="kube-system/kube-apiserver-172-232-4-171" Jan 22 00:34:45.587322 kubelet[2805]: I0122 00:34:45.587210 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9f95bfd63324691327f670865e035878-flexvolume-dir\") pod \"kube-controller-manager-172-232-4-171\" (UID: \"9f95bfd63324691327f670865e035878\") " pod="kube-system/kube-controller-manager-172-232-4-171" Jan 22 00:34:45.587322 kubelet[2805]: I0122 00:34:45.587276 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f95bfd63324691327f670865e035878-kubeconfig\") pod \"kube-controller-manager-172-232-4-171\" (UID: \"9f95bfd63324691327f670865e035878\") " pod="kube-system/kube-controller-manager-172-232-4-171" Jan 22 00:34:45.587375 kubelet[2805]: I0122 00:34:45.587307 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9f95bfd63324691327f670865e035878-usr-share-ca-certificates\") pod \"kube-controller-manager-172-232-4-171\" (UID: \"9f95bfd63324691327f670865e035878\") " pod="kube-system/kube-controller-manager-172-232-4-171" Jan 22 00:34:45.587375 kubelet[2805]: I0122 00:34:45.587365 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0cd11350b9cd6b96c6a160c7a6d1b89b-ca-certs\") pod \"kube-apiserver-172-232-4-171\" (UID: \"0cd11350b9cd6b96c6a160c7a6d1b89b\") " pod="kube-system/kube-apiserver-172-232-4-171" Jan 22 00:34:45.587424 kubelet[2805]: I0122 00:34:45.587388 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9f95bfd63324691327f670865e035878-ca-certs\") pod \"kube-controller-manager-172-232-4-171\" (UID: \"9f95bfd63324691327f670865e035878\") " pod="kube-system/kube-controller-manager-172-232-4-171" Jan 22 00:34:45.587461 kubelet[2805]: I0122 00:34:45.587439 2805 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9f95bfd63324691327f670865e035878-k8s-certs\") pod \"kube-controller-manager-172-232-4-171\" (UID: \"9f95bfd63324691327f670865e035878\") " pod="kube-system/kube-controller-manager-172-232-4-171" Jan 22 00:34:45.587490 kubelet[2805]: I0122 00:34:45.587464 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/38481fab5631e772b42181c1dae77b6c-kubeconfig\") pod \"kube-scheduler-172-232-4-171\" (UID: \"38481fab5631e772b42181c1dae77b6c\") " pod="kube-system/kube-scheduler-172-232-4-171" Jan 22 00:34:45.602721 kubelet[2805]: I0122 00:34:45.602203 2805 kubelet_node_status.go:75] "Attempting to register node" node="172-232-4-171" Jan 22 00:34:45.611231 kubelet[2805]: I0122 00:34:45.610469 2805 kubelet_node_status.go:124] "Node was previously registered" node="172-232-4-171" Jan 22 00:34:45.611231 kubelet[2805]: I0122 00:34:45.610558 2805 kubelet_node_status.go:78] "Successfully registered node" node="172-232-4-171" Jan 22 00:34:45.834478 kubelet[2805]: E0122 00:34:45.833697 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:34:45.835845 kubelet[2805]: E0122 00:34:45.835818 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:34:45.836099 kubelet[2805]: E0122 00:34:45.835954 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:34:46.353062 kubelet[2805]: I0122 00:34:46.353031 2805 apiserver.go:52] "Watching apiserver" Jan 22 00:34:46.381950 kubelet[2805]: I0122 00:34:46.381892 2805 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 22 00:34:46.456553 kubelet[2805]: I0122 00:34:46.456204 2805 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-232-4-171" Jan 22 00:34:46.457002 kubelet[2805]: I0122 00:34:46.456989 2805 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-232-4-171" Jan 22 00:34:46.458611 kubelet[2805]: E0122 00:34:46.458570 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:34:46.464221 kubelet[2805]: E0122 00:34:46.463755 2805 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-232-4-171\" already exists" pod="kube-system/kube-scheduler-172-232-4-171" Jan 22 00:34:46.464221 kubelet[2805]: E0122 00:34:46.463892 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:34:46.465719 kubelet[2805]: E0122 00:34:46.465691 2805 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-232-4-171\" already exists" pod="kube-system/kube-apiserver-172-232-4-171" Jan 22 00:34:46.466058 kubelet[2805]: E0122 00:34:46.466045 2805 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:34:46.478764 kubelet[2805]: I0122 00:34:46.478361 2805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-232-4-171" podStartSLOduration=1.478350629 podStartE2EDuration="1.478350629s" podCreationTimestamp="2026-01-22 00:34:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 00:34:46.478138391 +0000 UTC m=+1.200036577" watchObservedRunningTime="2026-01-22 00:34:46.478350629 +0000 UTC m=+1.200248815" Jan 22 00:34:46.491349 kubelet[2805]: I0122 00:34:46.491269 2805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-232-4-171" podStartSLOduration=1.491255624 podStartE2EDuration="1.491255624s" podCreationTimestamp="2026-01-22 00:34:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 00:34:46.490816907 +0000 UTC m=+1.212715093" watchObservedRunningTime="2026-01-22 00:34:46.491255624 +0000 UTC m=+1.213153810" Jan 22 00:34:46.491479 kubelet[2805]: I0122 00:34:46.491434 2805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-232-4-171" podStartSLOduration=1.491430624 podStartE2EDuration="1.491430624s" podCreationTimestamp="2026-01-22 00:34:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 00:34:46.484207053 +0000 UTC m=+1.206105239" watchObservedRunningTime="2026-01-22 00:34:46.491430624 +0000 UTC m=+1.213328820" Jan 22 00:34:47.458160 kubelet[2805]: E0122 00:34:47.458121 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:34:47.459366 kubelet[2805]: E0122 00:34:47.459352 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:34:48.459757 kubelet[2805]: E0122 00:34:48.459722 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:34:48.774111 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 22 00:34:48.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:48.783000 audit: BPF prog-id=128 op=UNLOAD Jan 22 00:34:50.585247 kubelet[2805]: I0122 00:34:50.585189 2805 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 22 00:34:50.585706 containerd[1629]: time="2026-01-22T00:34:50.585506731Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 22 00:34:50.585940 kubelet[2805]: I0122 00:34:50.585910 2805 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 22 00:34:51.221793 kubelet[2805]: I0122 00:34:51.221659 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5cd7868-159b-4168-b599-a358a01ed35c-lib-modules\") pod \"kube-proxy-gk6gm\" (UID: \"f5cd7868-159b-4168-b599-a358a01ed35c\") " pod="kube-system/kube-proxy-gk6gm" Jan 22 00:34:51.221793 kubelet[2805]: I0122 00:34:51.221695 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f5cd7868-159b-4168-b599-a358a01ed35c-kube-proxy\") pod \"kube-proxy-gk6gm\" (UID: \"f5cd7868-159b-4168-b599-a358a01ed35c\") " pod="kube-system/kube-proxy-gk6gm" Jan 22 00:34:51.221793 kubelet[2805]: I0122 00:34:51.221713 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5cd7868-159b-4168-b599-a358a01ed35c-xtables-lock\") pod \"kube-proxy-gk6gm\" (UID: \"f5cd7868-159b-4168-b599-a358a01ed35c\") " pod="kube-system/kube-proxy-gk6gm" Jan 22 00:34:51.221793 kubelet[2805]: I0122 00:34:51.221729 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wtvc\" (UniqueName: \"kubernetes.io/projected/f5cd7868-159b-4168-b599-a358a01ed35c-kube-api-access-6wtvc\") pod \"kube-proxy-gk6gm\" (UID: \"f5cd7868-159b-4168-b599-a358a01ed35c\") " pod="kube-system/kube-proxy-gk6gm" Jan 22 00:34:51.223056 systemd[1]: Created slice kubepods-besteffort-podf5cd7868_159b_4168_b599_a358a01ed35c.slice - libcontainer container kubepods-besteffort-podf5cd7868_159b_4168_b599_a358a01ed35c.slice. Jan 22 00:34:51.532892 kubelet[2805]: E0122 00:34:51.532782 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:34:51.535197 containerd[1629]: time="2026-01-22T00:34:51.535147728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gk6gm,Uid:f5cd7868-159b-4168-b599-a358a01ed35c,Namespace:kube-system,Attempt:0,}" Jan 22 00:34:51.564176 containerd[1629]: time="2026-01-22T00:34:51.564107070Z" level=info msg="connecting to shim 81c27c93f4bce3a14c44a92473697a7d1c62cf3acc569ca1e1df9c46991bdda7" address="unix:///run/containerd/s/82f7e1f50c99a16c321b3f9d5e8a4ce9a63d53f1b347bd9e8e3b6dd4351c72d5" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:34:51.589681 systemd[1]: Started cri-containerd-81c27c93f4bce3a14c44a92473697a7d1c62cf3acc569ca1e1df9c46991bdda7.scope - libcontainer container 81c27c93f4bce3a14c44a92473697a7d1c62cf3acc569ca1e1df9c46991bdda7. 
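The systemd slice name in the "Created slice" message just above is derived mechanically from the pod's QoS class and UID: with the systemd cgroup driver reported by the container manager earlier, a BestEffort pod lands in kubepods-besteffort-pod<uid>.slice, where the dashes inside the UID are rewritten as underscores because a dash in a systemd slice name denotes nesting (kubepods-besteffort-….slice lives under kubepods.slice). A small sketch reproducing the name from the kube-proxy pod's UID in the log; illustrative, not the kubelet's own helper:

```python
# Reconstruct the BestEffort pod slice name logged by systemd from the pod UID.
def besteffort_pod_slice(pod_uid: str) -> str:
    # Dashes in the UID would be read as extra nesting levels by systemd,
    # so the kubelet writes them as underscores inside the unit name.
    return f"kubepods-besteffort-pod{pod_uid.replace('-', '_')}.slice"

print(besteffort_pod_slice("f5cd7868-159b-4168-b599-a358a01ed35c"))
# -> kubepods-besteffort-podf5cd7868_159b_4168_b599_a358a01ed35c.slice
```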
Jan 22 00:34:51.613000 audit: BPF prog-id=145 op=LOAD Jan 22 00:34:51.616213 kernel: kauditd_printk_skb: 42 callbacks suppressed Jan 22 00:34:51.616279 kernel: audit: type=1334 audit(1769042091.613:453): prog-id=145 op=LOAD Jan 22 00:34:51.619000 audit: BPF prog-id=146 op=LOAD Jan 22 00:34:51.624548 kernel: audit: type=1334 audit(1769042091.619:454): prog-id=146 op=LOAD Jan 22 00:34:51.624721 kernel: audit: type=1300 audit(1769042091.619:454): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2864 pid=2875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:51.619000 audit[2875]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2864 pid=2875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:51.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831633237633933663462636533613134633434613932343733363937 Jan 22 00:34:51.646544 kernel: audit: type=1327 audit(1769042091.619:454): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831633237633933663462636533613134633434613932343733363937 Jan 22 00:34:51.619000 audit: BPF prog-id=146 op=UNLOAD Jan 22 00:34:51.655841 kernel: audit: type=1334 audit(1769042091.619:455): prog-id=146 op=UNLOAD Jan 22 00:34:51.655903 kernel: audit: type=1300 audit(1769042091.619:455): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2864 pid=2875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:51.619000 audit[2875]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2864 pid=2875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:51.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831633237633933663462636533613134633434613932343733363937 Jan 22 00:34:51.672376 kubelet[2805]: E0122 00:34:51.668030 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:34:51.672784 kernel: audit: type=1327 audit(1769042091.619:455): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831633237633933663462636533613134633434613932343733363937 Jan 22 00:34:51.672822 containerd[1629]: time="2026-01-22T00:34:51.666450210Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-gk6gm,Uid:f5cd7868-159b-4168-b599-a358a01ed35c,Namespace:kube-system,Attempt:0,} returns sandbox id \"81c27c93f4bce3a14c44a92473697a7d1c62cf3acc569ca1e1df9c46991bdda7\"" Jan 22 00:34:51.619000 audit: BPF prog-id=147 op=LOAD Jan 22 00:34:51.678551 kernel: audit: type=1334 audit(1769042091.619:456): prog-id=147 op=LOAD Jan 22 00:34:51.678635 kernel: audit: type=1300 audit(1769042091.619:456): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2864 pid=2875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:51.619000 audit[2875]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2864 pid=2875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:51.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831633237633933663462636533613134633434613932343733363937 Jan 22 00:34:51.685789 kernel: audit: type=1327 audit(1769042091.619:456): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831633237633933663462636533613134633434613932343733363937 Jan 22 00:34:51.619000 audit: BPF prog-id=148 op=LOAD Jan 22 00:34:51.619000 audit[2875]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2864 pid=2875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:51.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831633237633933663462636533613134633434613932343733363937 Jan 22 00:34:51.619000 audit: BPF prog-id=148 op=UNLOAD Jan 22 00:34:51.619000 audit[2875]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2864 pid=2875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:51.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831633237633933663462636533613134633434613932343733363937 Jan 22 00:34:51.619000 audit: BPF prog-id=147 op=UNLOAD Jan 22 00:34:51.619000 audit[2875]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2864 pid=2875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:51.619000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831633237633933663462636533613134633434613932343733363937 Jan 22 00:34:51.619000 audit: BPF prog-id=149 op=LOAD Jan 22 00:34:51.619000 audit[2875]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2864 pid=2875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:51.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831633237633933663462636533613134633434613932343733363937 Jan 22 00:34:51.693769 containerd[1629]: time="2026-01-22T00:34:51.693608418Z" level=info msg="CreateContainer within sandbox \"81c27c93f4bce3a14c44a92473697a7d1c62cf3acc569ca1e1df9c46991bdda7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 22 00:34:51.710342 containerd[1629]: time="2026-01-22T00:34:51.708665479Z" level=info msg="Container 7cf3017c155127428436df9de281bb038fc8e90827221f957eda9f46af3f950d: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:34:51.717428 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2224432338.mount: Deactivated successfully. Jan 22 00:34:51.726866 containerd[1629]: time="2026-01-22T00:34:51.726830500Z" level=info msg="CreateContainer within sandbox \"81c27c93f4bce3a14c44a92473697a7d1c62cf3acc569ca1e1df9c46991bdda7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7cf3017c155127428436df9de281bb038fc8e90827221f957eda9f46af3f950d\"" Jan 22 00:34:51.728198 containerd[1629]: time="2026-01-22T00:34:51.728171899Z" level=info msg="StartContainer for \"7cf3017c155127428436df9de281bb038fc8e90827221f957eda9f46af3f950d\"" Jan 22 00:34:51.730672 containerd[1629]: time="2026-01-22T00:34:51.730582815Z" level=info msg="connecting to shim 7cf3017c155127428436df9de281bb038fc8e90827221f957eda9f46af3f950d" address="unix:///run/containerd/s/82f7e1f50c99a16c321b3f9d5e8a4ce9a63d53f1b347bd9e8e3b6dd4351c72d5" protocol=ttrpc version=3 Jan 22 00:34:51.765759 systemd[1]: Started cri-containerd-7cf3017c155127428436df9de281bb038fc8e90827221f957eda9f46af3f950d.scope - libcontainer container 7cf3017c155127428436df9de281bb038fc8e90827221f957eda9f46af3f950d. Jan 22 00:34:51.775080 systemd[1]: Created slice kubepods-besteffort-pod31fa398a_9285_4b85_a847_3e54301efd50.slice - libcontainer container kubepods-besteffort-pod31fa398a_9285_4b85_a847_3e54301efd50.slice. 
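The audit PROCTITLE fields interleaved with the container start above are the executed command lines, hex-encoded with NUL bytes separating the arguments; the long 72756E63… values decode to the runc invocation for the kube-proxy sandbox (runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/…, with the sandbox ID truncated by the audit record itself). A short decoding sketch, shown here against a prefix of that value:

```python
# Decode an audit PROCTITLE value: hex-encoded argv with NUL separators.
def decode_proctitle(hex_value: str) -> str:
    return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode("utf-8", "replace")

# Prefix of the runc proctitle from the records above (shortened for readability).
print(decode_proctitle(
    "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
))
# -> runc --root /run/containerd/runc/k8s.io
```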
Jan 22 00:34:51.827068 kubelet[2805]: I0122 00:34:51.826260 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/31fa398a-9285-4b85-a847-3e54301efd50-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-xt9gx\" (UID: \"31fa398a-9285-4b85-a847-3e54301efd50\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-xt9gx" Jan 22 00:34:51.827068 kubelet[2805]: I0122 00:34:51.827008 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hprxf\" (UniqueName: \"kubernetes.io/projected/31fa398a-9285-4b85-a847-3e54301efd50-kube-api-access-hprxf\") pod \"tigera-operator-65cdcdfd6d-xt9gx\" (UID: \"31fa398a-9285-4b85-a847-3e54301efd50\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-xt9gx" Jan 22 00:34:51.846000 audit: BPF prog-id=150 op=LOAD Jan 22 00:34:51.846000 audit[2901]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2864 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:51.846000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763663330313763313535313237343238343336646639646532383162 Jan 22 00:34:51.846000 audit: BPF prog-id=151 op=LOAD Jan 22 00:34:51.846000 audit[2901]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2864 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:51.846000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763663330313763313535313237343238343336646639646532383162 Jan 22 00:34:51.846000 audit: BPF prog-id=151 op=UNLOAD Jan 22 00:34:51.846000 audit[2901]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2864 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:51.846000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763663330313763313535313237343238343336646639646532383162 Jan 22 00:34:51.846000 audit: BPF prog-id=150 op=UNLOAD Jan 22 00:34:51.846000 audit[2901]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2864 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:51.846000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763663330313763313535313237343238343336646639646532383162 Jan 22 00:34:51.846000 audit: BPF prog-id=152 op=LOAD Jan 22 00:34:51.846000 audit[2901]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2864 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:51.846000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763663330313763313535313237343238343336646639646532383162 Jan 22 00:34:51.870090 containerd[1629]: time="2026-01-22T00:34:51.869949051Z" level=info msg="StartContainer for \"7cf3017c155127428436df9de281bb038fc8e90827221f957eda9f46af3f950d\" returns successfully" Jan 22 00:34:52.081859 containerd[1629]: time="2026-01-22T00:34:52.081464715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-xt9gx,Uid:31fa398a-9285-4b85-a847-3e54301efd50,Namespace:tigera-operator,Attempt:0,}" Jan 22 00:34:52.095000 audit[2977]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=2977 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:52.099611 containerd[1629]: time="2026-01-22T00:34:52.099572112Z" level=info msg="connecting to shim 98d357341cc0d675fca3bc32a48eca3cf26f9b36d5e269067c4ab3fd19e08752" address="unix:///run/containerd/s/8eea01afe93797943e72009fe2a48598f7eaeb0a487b61f6b73911928e5716e3" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:34:52.095000 audit[2977]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffca45e9bd0 a2=0 a3=7ffca45e9bbc items=0 ppid=2914 pid=2977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.095000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 22 00:34:52.102000 audit[2979]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=2979 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:52.102000 audit[2979]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd7979cfe0 a2=0 a3=7ffd7979cfcc items=0 ppid=2914 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.102000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 22 00:34:52.107000 audit[2982]: NETFILTER_CFG table=filter:56 family=10 entries=1 op=nft_register_chain pid=2982 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:52.107000 audit[2982]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc8e00e110 a2=0 a3=7ffc8e00e0fc items=0 ppid=2914 pid=2982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.107000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 22 00:34:52.111000 audit[2991]: NETFILTER_CFG table=mangle:57 family=2 entries=1 op=nft_register_chain pid=2991 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:52.111000 audit[2991]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffe4e7ad70 a2=0 a3=7fffe4e7ad5c items=0 ppid=2914 pid=2991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.111000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 22 00:34:52.115000 audit[2996]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2996 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:52.115000 audit[2996]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffedea04fd0 a2=0 a3=7ffedea04fbc items=0 ppid=2914 pid=2996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.115000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 22 00:34:52.118000 audit[2997]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=2997 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:52.118000 audit[2997]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffd8b506d0 a2=0 a3=7fffd8b506bc items=0 ppid=2914 pid=2997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.118000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 22 00:34:52.138037 systemd[1]: Started cri-containerd-98d357341cc0d675fca3bc32a48eca3cf26f9b36d5e269067c4ab3fd19e08752.scope - libcontainer container 98d357341cc0d675fca3bc32a48eca3cf26f9b36d5e269067c4ab3fd19e08752. 
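The NETFILTER_CFG audit records surrounding the tigera-operator sandbox start are kube-proxy laying down its base chains: decoded the same way as the proctitles above, they correspond to commands such as ip6tables -w 5 -N KUBE-PROXY-CANARY -t mangle, i.e. the canary chain kube-proxy registers in the mangle, nat and filter tables for both IPv4 (family=2) and IPv6 (family=10) so that a later external flush of its rules can be detected and trigger a resync. A hedged sketch of how one could probe for those chains on the node; it mirrors the logged commands but is not kube-proxy's own monitoring code:

```python
# Probe for the KUBE-PROXY-CANARY chain in each table and address family.
import subprocess

def canary_present(binary: str, table: str) -> bool:
    # "-w 5" waits up to five seconds for the xtables lock, as in the logged commands;
    # listing a chain that does not exist makes iptables exit non-zero.
    result = subprocess.run(
        [binary, "-w", "5", "-t", table, "-nL", "KUBE-PROXY-CANARY"],
        capture_output=True,
    )
    return result.returncode == 0

for binary in ("iptables", "ip6tables"):
    for table in ("mangle", "nat", "filter"):
        print(f"{binary} {table}: {'present' if canary_present(binary, table) else 'missing'}")
```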
Jan 22 00:34:52.151000 audit: BPF prog-id=153 op=LOAD Jan 22 00:34:52.151000 audit: BPF prog-id=154 op=LOAD Jan 22 00:34:52.151000 audit[2995]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2976 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938643335373334316363306436373566636133626333326134386563 Jan 22 00:34:52.151000 audit: BPF prog-id=154 op=UNLOAD Jan 22 00:34:52.151000 audit[2995]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2976 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938643335373334316363306436373566636133626333326134386563 Jan 22 00:34:52.152000 audit: BPF prog-id=155 op=LOAD Jan 22 00:34:52.152000 audit[2995]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2976 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.152000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938643335373334316363306436373566636133626333326134386563 Jan 22 00:34:52.152000 audit: BPF prog-id=156 op=LOAD Jan 22 00:34:52.152000 audit[2995]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=2976 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.152000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938643335373334316363306436373566636133626333326134386563 Jan 22 00:34:52.152000 audit: BPF prog-id=156 op=UNLOAD Jan 22 00:34:52.152000 audit[2995]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2976 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.152000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938643335373334316363306436373566636133626333326134386563 Jan 22 00:34:52.152000 audit: BPF prog-id=155 op=UNLOAD Jan 22 00:34:52.152000 audit[2995]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2976 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.152000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938643335373334316363306436373566636133626333326134386563 Jan 22 00:34:52.153000 audit: BPF prog-id=157 op=LOAD Jan 22 00:34:52.153000 audit[2995]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=2976 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.153000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938643335373334316363306436373566636133626333326134386563 Jan 22 00:34:52.190166 containerd[1629]: time="2026-01-22T00:34:52.190090593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-xt9gx,Uid:31fa398a-9285-4b85-a847-3e54301efd50,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"98d357341cc0d675fca3bc32a48eca3cf26f9b36d5e269067c4ab3fd19e08752\"" Jan 22 00:34:52.194924 containerd[1629]: time="2026-01-22T00:34:52.194553494Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 22 00:34:52.204000 audit[3022]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3022 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:52.204000 audit[3022]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe25f7c790 a2=0 a3=7ffe25f7c77c items=0 ppid=2914 pid=3022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.204000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 22 00:34:52.209000 audit[3024]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3024 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:52.209000 audit[3024]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc7700bd00 a2=0 a3=7ffc7700bcec items=0 ppid=2914 pid=3024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.209000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73002D Jan 22 00:34:52.214000 audit[3027]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3027 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:52.214000 audit[3027]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff29477d40 a2=0 a3=7fff29477d2c items=0 ppid=2914 
pid=3027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.214000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 22 00:34:52.216000 audit[3028]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3028 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:52.216000 audit[3028]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdae762160 a2=0 a3=7ffdae76214c items=0 ppid=2914 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.216000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 22 00:34:52.219000 audit[3030]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:52.219000 audit[3030]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd76167610 a2=0 a3=7ffd761675fc items=0 ppid=2914 pid=3030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.219000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 22 00:34:52.224000 audit[3031]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3031 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:52.224000 audit[3031]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffde4e658a0 a2=0 a3=7ffde4e6588c items=0 ppid=2914 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.224000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 22 00:34:52.228000 audit[3033]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3033 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:52.228000 audit[3033]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd87baa100 a2=0 a3=7ffd87baa0ec items=0 ppid=2914 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.228000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 22 00:34:52.233000 audit[3036]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3036 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:52.233000 audit[3036]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffcdf31360 a2=0 a3=7fffcdf3134c items=0 ppid=2914 pid=3036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.233000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 22 00:34:52.234000 audit[3037]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3037 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:52.234000 audit[3037]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff862aacc0 a2=0 a3=7fff862aacac items=0 ppid=2914 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.234000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 22 00:34:52.239000 audit[3039]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3039 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:52.239000 audit[3039]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff5d3a7bb0 a2=0 a3=7fff5d3a7b9c items=0 ppid=2914 pid=3039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.239000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 22 00:34:52.240000 audit[3040]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3040 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:52.240000 audit[3040]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc57d88ec0 a2=0 a3=7ffc57d88eac items=0 ppid=2914 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.240000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 22 00:34:52.244000 audit[3042]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3042 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:52.244000 audit[3042]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdcb398e90 a2=0 a3=7ffdcb398e7c items=0 ppid=2914 pid=3042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.244000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F5859 Jan 22 00:34:52.249000 audit[3045]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3045 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:52.249000 audit[3045]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc92e805f0 a2=0 a3=7ffc92e805dc items=0 ppid=2914 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.249000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 22 00:34:52.254000 audit[3048]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3048 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:52.254000 audit[3048]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffff3955a20 a2=0 a3=7ffff3955a0c items=0 ppid=2914 pid=3048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.254000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 22 00:34:52.255000 audit[3049]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3049 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:52.255000 audit[3049]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffa177e710 a2=0 a3=7fffa177e6fc items=0 ppid=2914 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.255000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 22 00:34:52.259000 audit[3051]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3051 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:52.259000 audit[3051]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe6f300d30 a2=0 a3=7ffe6f300d1c items=0 ppid=2914 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.259000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 22 00:34:52.263000 audit[3054]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3054 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:52.263000 audit[3054]: SYSCALL arch=c000003e syscall=46 
success=yes exit=528 a0=3 a1=7ffeef1aa680 a2=0 a3=7ffeef1aa66c items=0 ppid=2914 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.263000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 22 00:34:52.265000 audit[3055]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3055 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:52.265000 audit[3055]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffda7458f90 a2=0 a3=7ffda7458f7c items=0 ppid=2914 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.265000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 22 00:34:52.268000 audit[3057]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3057 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:34:52.268000 audit[3057]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffca8e7f3c0 a2=0 a3=7ffca8e7f3ac items=0 ppid=2914 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.268000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 22 00:34:52.290000 audit[3063]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3063 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:34:52.290000 audit[3063]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc9ff126d0 a2=0 a3=7ffc9ff126bc items=0 ppid=2914 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.290000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:34:52.298000 audit[3063]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3063 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:34:52.298000 audit[3063]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffc9ff126d0 a2=0 a3=7ffc9ff126bc items=0 ppid=2914 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.298000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:34:52.300000 audit[3068]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3068 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:52.300000 audit[3068]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 
a0=3 a1=7ffe3406e7a0 a2=0 a3=7ffe3406e78c items=0 ppid=2914 pid=3068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.300000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 22 00:34:52.304000 audit[3070]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3070 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:52.304000 audit[3070]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff26b4ae20 a2=0 a3=7fff26b4ae0c items=0 ppid=2914 pid=3070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.304000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 22 00:34:52.309000 audit[3073]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3073 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:52.309000 audit[3073]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd5a0e5ef0 a2=0 a3=7ffd5a0e5edc items=0 ppid=2914 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.309000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C Jan 22 00:34:52.310000 audit[3074]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3074 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:52.310000 audit[3074]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe6e6dd600 a2=0 a3=7ffe6e6dd5ec items=0 ppid=2914 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.310000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 22 00:34:52.313000 audit[3076]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3076 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:52.313000 audit[3076]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc44735270 a2=0 a3=7ffc4473525c items=0 ppid=2914 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.313000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 22 00:34:52.315000 
audit[3077]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3077 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:52.315000 audit[3077]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff3be4b700 a2=0 a3=7fff3be4b6ec items=0 ppid=2914 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.315000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 22 00:34:52.318000 audit[3079]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3079 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:52.318000 audit[3079]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc8a3e0c10 a2=0 a3=7ffc8a3e0bfc items=0 ppid=2914 pid=3079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.318000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 22 00:34:52.323000 audit[3082]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3082 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:52.323000 audit[3082]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffecebcef50 a2=0 a3=7ffecebcef3c items=0 ppid=2914 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.323000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 22 00:34:52.324000 audit[3083]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3083 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:52.324000 audit[3083]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd7daec7f0 a2=0 a3=7ffd7daec7dc items=0 ppid=2914 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.324000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 22 00:34:52.327000 audit[3085]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3085 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:52.327000 audit[3085]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd5e5dc790 a2=0 a3=7ffd5e5dc77c items=0 ppid=2914 pid=3085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.327000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 22 00:34:52.328000 audit[3086]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3086 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:52.328000 audit[3086]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe6c5fd750 a2=0 a3=7ffe6c5fd73c items=0 ppid=2914 pid=3086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.328000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 22 00:34:52.337000 audit[3088]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3088 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:52.337000 audit[3088]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc74a6afa0 a2=0 a3=7ffc74a6af8c items=0 ppid=2914 pid=3088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.337000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 22 00:34:52.345000 audit[3091]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3091 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:52.345000 audit[3091]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff1e1b5350 a2=0 a3=7fff1e1b533c items=0 ppid=2914 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.345000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 22 00:34:52.350000 audit[3094]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3094 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:52.350000 audit[3094]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffe5673680 a2=0 a3=7fffe567366c items=0 ppid=2914 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.350000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D5052 Jan 22 00:34:52.352000 audit[3095]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3095 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:52.352000 
audit[3095]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffea318eca0 a2=0 a3=7ffea318ec8c items=0 ppid=2914 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.352000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 22 00:34:52.355000 audit[3097]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3097 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:52.355000 audit[3097]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fffe71a2c50 a2=0 a3=7fffe71a2c3c items=0 ppid=2914 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.355000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 22 00:34:52.360000 audit[3100]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3100 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:52.360000 audit[3100]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe519b1170 a2=0 a3=7ffe519b115c items=0 ppid=2914 pid=3100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.360000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 22 00:34:52.361000 audit[3101]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3101 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:52.361000 audit[3101]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd2d58450 a2=0 a3=7fffd2d5843c items=0 ppid=2914 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.361000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 22 00:34:52.364000 audit[3103]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3103 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:52.364000 audit[3103]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffca2baf9a0 a2=0 a3=7ffca2baf98c items=0 ppid=2914 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.364000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 22 00:34:52.366000 audit[3104]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3104 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:52.366000 audit[3104]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe70bee080 a2=0 a3=7ffe70bee06c items=0 ppid=2914 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.366000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 22 00:34:52.369000 audit[3106]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3106 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:52.369000 audit[3106]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc31736800 a2=0 a3=7ffc317367ec items=0 ppid=2914 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.369000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 22 00:34:52.376000 audit[3109]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3109 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:34:52.376000 audit[3109]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffb0bf6f80 a2=0 a3=7fffb0bf6f6c items=0 ppid=2914 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.376000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 22 00:34:52.383000 audit[3111]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3111 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 22 00:34:52.383000 audit[3111]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffde4f09250 a2=0 a3=7ffde4f0923c items=0 ppid=2914 pid=3111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.383000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:34:52.384000 audit[3111]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3111 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 22 00:34:52.384000 audit[3111]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffde4f09250 a2=0 a3=7ffde4f0923c items=0 ppid=2914 pid=3111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:52.384000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:34:52.469165 kubelet[2805]: E0122 00:34:52.469119 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:34:52.479015 kubelet[2805]: I0122 00:34:52.478922 2805 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gk6gm" podStartSLOduration=1.4789053 podStartE2EDuration="1.4789053s" podCreationTimestamp="2026-01-22 00:34:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 00:34:52.478714631 +0000 UTC m=+7.200612817" watchObservedRunningTime="2026-01-22 00:34:52.4789053 +0000 UTC m=+7.200803486" Jan 22 00:34:52.647353 kubelet[2805]: E0122 00:34:52.646628 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:34:53.047275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4029097948.mount: Deactivated successfully. Jan 22 00:34:53.472393 kubelet[2805]: E0122 00:34:53.472177 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:34:53.539152 containerd[1629]: time="2026-01-22T00:34:53.538438270Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:34:53.539152 containerd[1629]: time="2026-01-22T00:34:53.539123643Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=0" Jan 22 00:34:53.539688 containerd[1629]: time="2026-01-22T00:34:53.539667067Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:34:53.540970 containerd[1629]: time="2026-01-22T00:34:53.540951112Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:34:53.541679 containerd[1629]: time="2026-01-22T00:34:53.541658672Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.34622091s" Jan 22 00:34:53.541743 containerd[1629]: time="2026-01-22T00:34:53.541730846Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 22 00:34:53.546090 containerd[1629]: time="2026-01-22T00:34:53.546068487Z" level=info msg="CreateContainer within sandbox \"98d357341cc0d675fca3bc32a48eca3cf26f9b36d5e269067c4ab3fd19e08752\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 22 00:34:53.555401 containerd[1629]: time="2026-01-22T00:34:53.554666853Z" level=info msg="Container 9720ea2f51026a31486cebc1c7b7015bb922304228e3600782094974f9f480e1: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:34:53.561857 containerd[1629]: time="2026-01-22T00:34:53.561828881Z" level=info msg="CreateContainer within sandbox \"98d357341cc0d675fca3bc32a48eca3cf26f9b36d5e269067c4ab3fd19e08752\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9720ea2f51026a31486cebc1c7b7015bb922304228e3600782094974f9f480e1\"" Jan 22 00:34:53.562504 
containerd[1629]: time="2026-01-22T00:34:53.562476631Z" level=info msg="StartContainer for \"9720ea2f51026a31486cebc1c7b7015bb922304228e3600782094974f9f480e1\"" Jan 22 00:34:53.563667 containerd[1629]: time="2026-01-22T00:34:53.563639555Z" level=info msg="connecting to shim 9720ea2f51026a31486cebc1c7b7015bb922304228e3600782094974f9f480e1" address="unix:///run/containerd/s/8eea01afe93797943e72009fe2a48598f7eaeb0a487b61f6b73911928e5716e3" protocol=ttrpc version=3 Jan 22 00:34:53.585690 systemd[1]: Started cri-containerd-9720ea2f51026a31486cebc1c7b7015bb922304228e3600782094974f9f480e1.scope - libcontainer container 9720ea2f51026a31486cebc1c7b7015bb922304228e3600782094974f9f480e1. Jan 22 00:34:53.600000 audit: BPF prog-id=158 op=LOAD Jan 22 00:34:53.601000 audit: BPF prog-id=159 op=LOAD Jan 22 00:34:53.601000 audit[3120]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2976 pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:53.601000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937323065613266353130323661333134383663656263316337623730 Jan 22 00:34:53.601000 audit: BPF prog-id=159 op=UNLOAD Jan 22 00:34:53.601000 audit[3120]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2976 pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:53.601000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937323065613266353130323661333134383663656263316337623730 Jan 22 00:34:53.601000 audit: BPF prog-id=160 op=LOAD Jan 22 00:34:53.601000 audit[3120]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2976 pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:53.601000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937323065613266353130323661333134383663656263316337623730 Jan 22 00:34:53.601000 audit: BPF prog-id=161 op=LOAD Jan 22 00:34:53.601000 audit[3120]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2976 pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:53.601000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937323065613266353130323661333134383663656263316337623730 Jan 22 00:34:53.601000 audit: BPF prog-id=161 op=UNLOAD Jan 22 00:34:53.601000 
audit[3120]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2976 pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:53.601000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937323065613266353130323661333134383663656263316337623730 Jan 22 00:34:53.601000 audit: BPF prog-id=160 op=UNLOAD Jan 22 00:34:53.601000 audit[3120]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2976 pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:53.601000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937323065613266353130323661333134383663656263316337623730 Jan 22 00:34:53.601000 audit: BPF prog-id=162 op=LOAD Jan 22 00:34:53.601000 audit[3120]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2976 pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:53.601000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937323065613266353130323661333134383663656263316337623730 Jan 22 00:34:53.621091 containerd[1629]: time="2026-01-22T00:34:53.621044791Z" level=info msg="StartContainer for \"9720ea2f51026a31486cebc1c7b7015bb922304228e3600782094974f9f480e1\" returns successfully" Jan 22 00:34:54.483489 kubelet[2805]: I0122 00:34:54.482993 2805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-xt9gx" podStartSLOduration=2.133885733 podStartE2EDuration="3.48297275s" podCreationTimestamp="2026-01-22 00:34:51 +0000 UTC" firstStartedPulling="2026-01-22 00:34:52.193660715 +0000 UTC m=+6.915558901" lastFinishedPulling="2026-01-22 00:34:53.542747732 +0000 UTC m=+8.264645918" observedRunningTime="2026-01-22 00:34:54.482734864 +0000 UTC m=+9.204633050" watchObservedRunningTime="2026-01-22 00:34:54.48297275 +0000 UTC m=+9.204870946" Jan 22 00:34:55.175711 kubelet[2805]: E0122 00:34:55.175658 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:34:55.477909 kubelet[2805]: E0122 00:34:55.477771 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:34:57.167635 kubelet[2805]: E0122 00:34:57.167539 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 
00:34:57.479175 kubelet[2805]: E0122 00:34:57.479078 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:34:59.225000 audit[1871]: USER_END pid=1871 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 22 00:34:59.226676 sudo[1871]: pam_unix(sudo:session): session closed for user root Jan 22 00:34:59.230525 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 22 00:34:59.230586 kernel: audit: type=1106 audit(1769042099.225:533): pid=1871 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 22 00:34:59.225000 audit[1871]: CRED_DISP pid=1871 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 22 00:34:59.244696 kernel: audit: type=1104 audit(1769042099.225:534): pid=1871 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 22 00:34:59.253582 sshd[1870]: Connection closed by 20.161.92.111 port 41502 Jan 22 00:34:59.255572 sshd-session[1867]: pam_unix(sshd:session): session closed for user core Jan 22 00:34:59.265000 audit[1867]: USER_END pid=1867 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:34:59.272269 systemd[1]: sshd@6-172.232.4.171:22-20.161.92.111:41502.service: Deactivated successfully. Jan 22 00:34:59.280829 kernel: audit: type=1106 audit(1769042099.265:535): pid=1867 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:34:59.265000 audit[1867]: CRED_DISP pid=1867 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:34:59.288855 systemd[1]: session-7.scope: Deactivated successfully. Jan 22 00:34:59.294979 kernel: audit: type=1104 audit(1769042099.265:536): pid=1867 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:34:59.295037 kernel: audit: type=1131 audit(1769042099.271:537): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.232.4.171:22-20.161.92.111:41502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:34:59.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.232.4.171:22-20.161.92.111:41502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:59.290282 systemd[1]: session-7.scope: Consumed 4.515s CPU time, 229.5M memory peak. Jan 22 00:34:59.297750 systemd-logind[1596]: Session 7 logged out. Waiting for processes to exit. Jan 22 00:34:59.300361 systemd-logind[1596]: Removed session 7. Jan 22 00:35:00.198000 audit[3203]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3203 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:35:00.198000 audit[3203]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff7073bec0 a2=0 a3=7fff7073beac items=0 ppid=2914 pid=3203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:00.207790 kernel: audit: type=1325 audit(1769042100.198:538): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3203 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:35:00.207844 kernel: audit: type=1300 audit(1769042100.198:538): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff7073bec0 a2=0 a3=7fff7073beac items=0 ppid=2914 pid=3203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:00.198000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:35:00.214000 audit[3203]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3203 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:35:00.221497 kernel: audit: type=1327 audit(1769042100.198:538): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:35:00.221547 kernel: audit: type=1325 audit(1769042100.214:539): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3203 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:35:00.214000 audit[3203]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff7073bec0 a2=0 a3=0 items=0 ppid=2914 pid=3203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:00.214000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:35:00.235535 kernel: audit: type=1300 audit(1769042100.214:539): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff7073bec0 a2=0 a3=0 items=0 ppid=2914 pid=3203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:01.241000 audit[3205]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3205 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:35:01.241000 audit[3205]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff8f10c1e0 a2=0 a3=7fff8f10c1cc items=0 ppid=2914 pid=3205 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:01.241000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:35:01.246000 audit[3205]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3205 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:35:01.246000 audit[3205]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff8f10c1e0 a2=0 a3=0 items=0 ppid=2914 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:01.246000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:35:02.508000 audit[3207]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3207 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:35:02.508000 audit[3207]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe45b954b0 a2=0 a3=7ffe45b9549c items=0 ppid=2914 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:02.508000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:35:02.517000 audit[3207]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3207 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:35:02.517000 audit[3207]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe45b954b0 a2=0 a3=0 items=0 ppid=2914 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:02.517000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:35:03.580000 audit[3209]: NETFILTER_CFG table=filter:111 family=2 entries=19 op=nft_register_rule pid=3209 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:35:03.580000 audit[3209]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe9635f650 a2=0 a3=7ffe9635f63c items=0 ppid=2914 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:03.580000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:35:03.594000 audit[3209]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3209 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:35:03.594000 audit[3209]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe9635f650 a2=0 a3=0 items=0 ppid=2914 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:03.594000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:35:03.702767 update_engine[1597]: I20260122 00:35:03.702405 1597 update_attempter.cc:509] Updating boot flags... Jan 22 00:35:04.768623 kernel: kauditd_printk_skb: 19 callbacks suppressed Jan 22 00:35:04.768917 kernel: audit: type=1325 audit(1769042104.746:546): table=filter:113 family=2 entries=21 op=nft_register_rule pid=3235 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:35:04.746000 audit[3235]: NETFILTER_CFG table=filter:113 family=2 entries=21 op=nft_register_rule pid=3235 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:35:04.746000 audit[3235]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff4e885060 a2=0 a3=7fff4e88504c items=0 ppid=2914 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:04.795233 kernel: audit: type=1300 audit(1769042104.746:546): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff4e885060 a2=0 a3=7fff4e88504c items=0 ppid=2914 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:04.795318 kernel: audit: type=1327 audit(1769042104.746:546): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:35:04.746000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:35:04.806000 audit[3235]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3235 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:35:04.813728 kernel: audit: type=1325 audit(1769042104.806:547): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3235 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:35:04.806000 audit[3235]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff4e885060 a2=0 a3=0 items=0 ppid=2914 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:04.823832 kernel: audit: type=1300 audit(1769042104.806:547): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff4e885060 a2=0 a3=0 items=0 ppid=2914 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:04.806000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:35:04.830552 kernel: audit: type=1327 audit(1769042104.806:547): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:35:04.846668 systemd[1]: Created slice kubepods-besteffort-pod1a930ab4_6d50_412b_9229_00090f78cd9f.slice - libcontainer container kubepods-besteffort-pod1a930ab4_6d50_412b_9229_00090f78cd9f.slice. 
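The NETFILTER_CFG / SYSCALL / PROCTITLE audit triplets above record every iptables and ip6tables invocation made on behalf of kube-proxy, but the PROCTITLE value is the audited process argv, hex-encoded with NUL separators, so the actual commands are hard to read in place. Below is a minimal decoding sketch, assuming Python 3 and that this journal has been saved to a plain-text file; the journal.txt path and the regex are illustrative helpers, not anything taken from the log itself.

    import re
    import sys

    # Matches the hex payload of audit PROCTITLE records, e.g. proctitle=697074...
    PROCTITLE_RE = re.compile(r"proctitle=([0-9A-Fa-f]+)")

    def decode_proctitle(hex_argv):
        # PROCTITLE carries the process argv as hex-encoded, NUL-separated
        # bytes; rejoin the fields with spaces for display.
        if len(hex_argv) % 2:        # not a valid hex byte string; leave as-is
            return hex_argv
        raw = bytes.fromhex(hex_argv)
        return " ".join(p.decode("utf-8", "replace")
                        for p in raw.split(b"\x00") if p)

    def main(path):
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                for m in PROCTITLE_RE.finditer(line):
                    print(decode_proctitle(m.group(1)))

    if __name__ == "__main__":
        # journal.txt is a placeholder for wherever this log was saved
        main(sys.argv[1] if len(sys.argv) > 1 else "journal.txt")

Run against the records above, this prints command lines such as "iptables -w 5 -I PREROUTING -t nat -m comment --comment kubernetes service portals -j KUBE-SERVICES" and "iptables-restore -w 5 --noflush --counters", consistent with kube-proxy first installing and then periodically resyncing its KUBE-* chains through xtables-nft-multi.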
Jan 22 00:35:04.961170 kubelet[2805]: I0122 00:35:04.961117 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a930ab4-6d50-412b-9229-00090f78cd9f-tigera-ca-bundle\") pod \"calico-typha-55d4f8f45b-fcxw7\" (UID: \"1a930ab4-6d50-412b-9229-00090f78cd9f\") " pod="calico-system/calico-typha-55d4f8f45b-fcxw7" Jan 22 00:35:04.961170 kubelet[2805]: I0122 00:35:04.961164 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1a930ab4-6d50-412b-9229-00090f78cd9f-typha-certs\") pod \"calico-typha-55d4f8f45b-fcxw7\" (UID: \"1a930ab4-6d50-412b-9229-00090f78cd9f\") " pod="calico-system/calico-typha-55d4f8f45b-fcxw7" Jan 22 00:35:04.964566 kubelet[2805]: I0122 00:35:04.961188 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj427\" (UniqueName: \"kubernetes.io/projected/1a930ab4-6d50-412b-9229-00090f78cd9f-kube-api-access-pj427\") pod \"calico-typha-55d4f8f45b-fcxw7\" (UID: \"1a930ab4-6d50-412b-9229-00090f78cd9f\") " pod="calico-system/calico-typha-55d4f8f45b-fcxw7" Jan 22 00:35:04.962085 systemd[1]: Created slice kubepods-besteffort-pode4e7c73c_c6fc_4dab_baa7_4b921b36faa2.slice - libcontainer container kubepods-besteffort-pode4e7c73c_c6fc_4dab_baa7_4b921b36faa2.slice. Jan 22 00:35:05.063424 kubelet[2805]: I0122 00:35:05.061886 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e4e7c73c-c6fc-4dab-baa7-4b921b36faa2-var-lib-calico\") pod \"calico-node-48nss\" (UID: \"e4e7c73c-c6fc-4dab-baa7-4b921b36faa2\") " pod="calico-system/calico-node-48nss" Jan 22 00:35:05.063424 kubelet[2805]: I0122 00:35:05.061919 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgnrg\" (UniqueName: \"kubernetes.io/projected/e4e7c73c-c6fc-4dab-baa7-4b921b36faa2-kube-api-access-tgnrg\") pod \"calico-node-48nss\" (UID: \"e4e7c73c-c6fc-4dab-baa7-4b921b36faa2\") " pod="calico-system/calico-node-48nss" Jan 22 00:35:05.063424 kubelet[2805]: I0122 00:35:05.061934 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e4e7c73c-c6fc-4dab-baa7-4b921b36faa2-cni-bin-dir\") pod \"calico-node-48nss\" (UID: \"e4e7c73c-c6fc-4dab-baa7-4b921b36faa2\") " pod="calico-system/calico-node-48nss" Jan 22 00:35:05.063424 kubelet[2805]: I0122 00:35:05.061958 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e4e7c73c-c6fc-4dab-baa7-4b921b36faa2-cni-log-dir\") pod \"calico-node-48nss\" (UID: \"e4e7c73c-c6fc-4dab-baa7-4b921b36faa2\") " pod="calico-system/calico-node-48nss" Jan 22 00:35:05.063424 kubelet[2805]: I0122 00:35:05.061971 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e4e7c73c-c6fc-4dab-baa7-4b921b36faa2-xtables-lock\") pod \"calico-node-48nss\" (UID: \"e4e7c73c-c6fc-4dab-baa7-4b921b36faa2\") " pod="calico-system/calico-node-48nss" Jan 22 00:35:05.063701 kubelet[2805]: I0122 00:35:05.061994 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e4e7c73c-c6fc-4dab-baa7-4b921b36faa2-cni-net-dir\") pod \"calico-node-48nss\" (UID: \"e4e7c73c-c6fc-4dab-baa7-4b921b36faa2\") " pod="calico-system/calico-node-48nss" Jan 22 00:35:05.063701 kubelet[2805]: I0122 00:35:05.062009 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e4e7c73c-c6fc-4dab-baa7-4b921b36faa2-flexvol-driver-host\") pod \"calico-node-48nss\" (UID: \"e4e7c73c-c6fc-4dab-baa7-4b921b36faa2\") " pod="calico-system/calico-node-48nss" Jan 22 00:35:05.063701 kubelet[2805]: I0122 00:35:05.062396 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e4e7c73c-c6fc-4dab-baa7-4b921b36faa2-lib-modules\") pod \"calico-node-48nss\" (UID: \"e4e7c73c-c6fc-4dab-baa7-4b921b36faa2\") " pod="calico-system/calico-node-48nss" Jan 22 00:35:05.063701 kubelet[2805]: I0122 00:35:05.062444 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e4e7c73c-c6fc-4dab-baa7-4b921b36faa2-policysync\") pod \"calico-node-48nss\" (UID: \"e4e7c73c-c6fc-4dab-baa7-4b921b36faa2\") " pod="calico-system/calico-node-48nss" Jan 22 00:35:05.063701 kubelet[2805]: I0122 00:35:05.062480 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e4e7c73c-c6fc-4dab-baa7-4b921b36faa2-node-certs\") pod \"calico-node-48nss\" (UID: \"e4e7c73c-c6fc-4dab-baa7-4b921b36faa2\") " pod="calico-system/calico-node-48nss" Jan 22 00:35:05.063856 kubelet[2805]: I0122 00:35:05.062705 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4e7c73c-c6fc-4dab-baa7-4b921b36faa2-tigera-ca-bundle\") pod \"calico-node-48nss\" (UID: \"e4e7c73c-c6fc-4dab-baa7-4b921b36faa2\") " pod="calico-system/calico-node-48nss" Jan 22 00:35:05.063856 kubelet[2805]: I0122 00:35:05.062722 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e4e7c73c-c6fc-4dab-baa7-4b921b36faa2-var-run-calico\") pod \"calico-node-48nss\" (UID: \"e4e7c73c-c6fc-4dab-baa7-4b921b36faa2\") " pod="calico-system/calico-node-48nss" Jan 22 00:35:05.157303 kubelet[2805]: E0122 00:35:05.157221 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wm244" podUID="ae45d206-785e-4662-9efc-4b0987941483" Jan 22 00:35:05.173643 kubelet[2805]: E0122 00:35:05.173619 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.175575 kubelet[2805]: W0122 00:35:05.175548 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.175713 kubelet[2805]: E0122 00:35:05.175696 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, 
skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.176717 kubelet[2805]: E0122 00:35:05.176102 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.176717 kubelet[2805]: W0122 00:35:05.176124 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.176717 kubelet[2805]: E0122 00:35:05.176141 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.176717 kubelet[2805]: E0122 00:35:05.176376 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.176717 kubelet[2805]: W0122 00:35:05.176387 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.176717 kubelet[2805]: E0122 00:35:05.176400 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.176936 kubelet[2805]: E0122 00:35:05.176917 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.176936 kubelet[2805]: W0122 00:35:05.176927 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.177015 kubelet[2805]: E0122 00:35:05.176940 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.177712 kubelet[2805]: E0122 00:35:05.177222 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:35:05.178250 containerd[1629]: time="2026-01-22T00:35:05.178209066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55d4f8f45b-fcxw7,Uid:1a930ab4-6d50-412b-9229-00090f78cd9f,Namespace:calico-system,Attempt:0,}" Jan 22 00:35:05.181158 kubelet[2805]: E0122 00:35:05.179607 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.181158 kubelet[2805]: W0122 00:35:05.179618 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.181158 kubelet[2805]: E0122 00:35:05.179631 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:35:05.181158 kubelet[2805]: E0122 00:35:05.180077 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.181158 kubelet[2805]: W0122 00:35:05.180089 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.181158 kubelet[2805]: E0122 00:35:05.180103 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.181828 kubelet[2805]: E0122 00:35:05.181804 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.184091 kubelet[2805]: W0122 00:35:05.181838 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.184091 kubelet[2805]: E0122 00:35:05.181852 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.184756 kubelet[2805]: E0122 00:35:05.184362 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.184756 kubelet[2805]: W0122 00:35:05.184378 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.184756 kubelet[2805]: E0122 00:35:05.184391 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.185302 kubelet[2805]: E0122 00:35:05.185230 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.185467 kubelet[2805]: W0122 00:35:05.185272 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.185467 kubelet[2805]: E0122 00:35:05.185409 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.219007 kubelet[2805]: E0122 00:35:05.218755 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.219007 kubelet[2805]: W0122 00:35:05.218781 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.219007 kubelet[2805]: E0122 00:35:05.218834 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:35:05.221195 kubelet[2805]: E0122 00:35:05.220989 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.221195 kubelet[2805]: W0122 00:35:05.221007 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.222633 kubelet[2805]: E0122 00:35:05.222391 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.224037 kubelet[2805]: E0122 00:35:05.223833 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.224037 kubelet[2805]: W0122 00:35:05.223971 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.224037 kubelet[2805]: E0122 00:35:05.223990 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.229356 containerd[1629]: time="2026-01-22T00:35:05.229317330Z" level=info msg="connecting to shim f30501bfb2b078f4ca24da8f33a293bcb0a1627da463e9bb2965dbd3d6c4a855" address="unix:///run/containerd/s/c40dd84e12ac20bc35471a930b4b3948629650c3b446e62baaf74f438f1ad257" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:35:05.234235 kubelet[2805]: E0122 00:35:05.234000 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.234235 kubelet[2805]: W0122 00:35:05.234017 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.234235 kubelet[2805]: E0122 00:35:05.234034 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.234949 kubelet[2805]: E0122 00:35:05.234904 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.235021 kubelet[2805]: W0122 00:35:05.235008 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.235579 kubelet[2805]: E0122 00:35:05.235089 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:35:05.236545 kubelet[2805]: E0122 00:35:05.236285 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.236545 kubelet[2805]: W0122 00:35:05.236300 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.236545 kubelet[2805]: E0122 00:35:05.236313 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.237141 kubelet[2805]: E0122 00:35:05.237041 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.237141 kubelet[2805]: W0122 00:35:05.237081 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.237141 kubelet[2805]: E0122 00:35:05.237095 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.239325 kubelet[2805]: E0122 00:35:05.239139 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.239325 kubelet[2805]: W0122 00:35:05.239154 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.239325 kubelet[2805]: E0122 00:35:05.239167 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.243234 kubelet[2805]: E0122 00:35:05.243217 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.243498 kubelet[2805]: W0122 00:35:05.243297 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.243498 kubelet[2805]: E0122 00:35:05.243314 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.244643 kubelet[2805]: E0122 00:35:05.244628 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.249461 kubelet[2805]: W0122 00:35:05.249297 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.249461 kubelet[2805]: E0122 00:35:05.249332 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:35:05.253723 kubelet[2805]: E0122 00:35:05.253251 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.253723 kubelet[2805]: W0122 00:35:05.253262 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.253723 kubelet[2805]: E0122 00:35:05.253275 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.256999 kubelet[2805]: E0122 00:35:05.256983 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.257160 kubelet[2805]: W0122 00:35:05.257064 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.257160 kubelet[2805]: E0122 00:35:05.257082 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.257651 kubelet[2805]: E0122 00:35:05.257605 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.257651 kubelet[2805]: W0122 00:35:05.257620 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.257651 kubelet[2805]: E0122 00:35:05.257633 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.258105 kubelet[2805]: E0122 00:35:05.258056 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.258105 kubelet[2805]: W0122 00:35:05.258073 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.258105 kubelet[2805]: E0122 00:35:05.258087 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.258765 kubelet[2805]: E0122 00:35:05.258583 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.258765 kubelet[2805]: W0122 00:35:05.258693 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.258765 kubelet[2805]: E0122 00:35:05.258708 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
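The repeated driver-call.go and plugins.go messages above are kubelet's FlexVolume prober exec'ing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init; the binary is not installed, so the call produces no output and the JSON decode fails. As a rough illustration of the contract kubelet is waiting on, the sketch below (Go, illustrative only, not the real nodeagent~uds driver) prints the kind of status document a FlexVolume driver conventionally answers init with:

    // flexvol_init.go: illustrative stand-in for a FlexVolume driver binary.
    // If built and installed as .../volume/exec/nodeagent~uds/uds it would give
    // kubelet valid JSON for "init" instead of the empty output logged above.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // driverStatus mirrors the JSON shape FlexVolume drivers are expected to emit.
    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            out, _ := json.Marshal(driverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            })
            fmt.Println(string(out))
            return
        }
        // Calls this sketch does not implement are reported as unsupported.
        out, _ := json.Marshal(driverStatus{Status: "Not supported"})
        fmt.Println(string(out))
        os.Exit(1)
    }

Installing a working driver binary in that directory, or removing the stale nodeagent~uds directory, would quiet this particular probe error.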
Error: unexpected end of JSON input" Jan 22 00:35:05.259973 kubelet[2805]: E0122 00:35:05.259426 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.259973 kubelet[2805]: W0122 00:35:05.259441 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.259973 kubelet[2805]: E0122 00:35:05.259453 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.260779 kubelet[2805]: E0122 00:35:05.260764 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.260939 kubelet[2805]: W0122 00:35:05.260923 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.261105 kubelet[2805]: E0122 00:35:05.261088 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.261951 kubelet[2805]: E0122 00:35:05.261790 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.261951 kubelet[2805]: W0122 00:35:05.261804 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.261951 kubelet[2805]: E0122 00:35:05.261815 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.262853 kubelet[2805]: E0122 00:35:05.262684 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.262853 kubelet[2805]: W0122 00:35:05.262700 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.262853 kubelet[2805]: E0122 00:35:05.262713 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.264179 kubelet[2805]: E0122 00:35:05.264164 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.264312 kubelet[2805]: W0122 00:35:05.264239 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.264312 kubelet[2805]: E0122 00:35:05.264256 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:35:05.266550 kubelet[2805]: E0122 00:35:05.266415 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.266550 kubelet[2805]: W0122 00:35:05.266429 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.266550 kubelet[2805]: E0122 00:35:05.266441 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.267715 kubelet[2805]: E0122 00:35:05.267344 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.267715 kubelet[2805]: W0122 00:35:05.267359 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.267715 kubelet[2805]: E0122 00:35:05.267371 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.267715 kubelet[2805]: I0122 00:35:05.267396 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ae45d206-785e-4662-9efc-4b0987941483-registration-dir\") pod \"csi-node-driver-wm244\" (UID: \"ae45d206-785e-4662-9efc-4b0987941483\") " pod="calico-system/csi-node-driver-wm244" Jan 22 00:35:05.269680 kubelet[2805]: E0122 00:35:05.269137 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.269680 kubelet[2805]: W0122 00:35:05.269155 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.269680 kubelet[2805]: E0122 00:35:05.269169 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.269680 kubelet[2805]: I0122 00:35:05.269192 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ae45d206-785e-4662-9efc-4b0987941483-socket-dir\") pod \"csi-node-driver-wm244\" (UID: \"ae45d206-785e-4662-9efc-4b0987941483\") " pod="calico-system/csi-node-driver-wm244" Jan 22 00:35:05.271555 kubelet[2805]: E0122 00:35:05.271458 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.271850 kubelet[2805]: W0122 00:35:05.271738 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.271966 kubelet[2805]: E0122 00:35:05.271919 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:35:05.272280 kubelet[2805]: I0122 00:35:05.272088 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae45d206-785e-4662-9efc-4b0987941483-kubelet-dir\") pod \"csi-node-driver-wm244\" (UID: \"ae45d206-785e-4662-9efc-4b0987941483\") " pod="calico-system/csi-node-driver-wm244" Jan 22 00:35:05.273844 kubelet[2805]: E0122 00:35:05.273681 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.273844 kubelet[2805]: W0122 00:35:05.273697 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.273844 kubelet[2805]: E0122 00:35:05.273712 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.273844 kubelet[2805]: I0122 00:35:05.273735 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dqcc\" (UniqueName: \"kubernetes.io/projected/ae45d206-785e-4662-9efc-4b0987941483-kube-api-access-9dqcc\") pod \"csi-node-driver-wm244\" (UID: \"ae45d206-785e-4662-9efc-4b0987941483\") " pod="calico-system/csi-node-driver-wm244" Jan 22 00:35:05.274370 kubelet[2805]: E0122 00:35:05.270358 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:35:05.274806 kubelet[2805]: E0122 00:35:05.274700 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.275418 kubelet[2805]: W0122 00:35:05.275079 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.275418 kubelet[2805]: E0122 00:35:05.275102 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.275418 kubelet[2805]: I0122 00:35:05.275123 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ae45d206-785e-4662-9efc-4b0987941483-varrun\") pod \"csi-node-driver-wm244\" (UID: \"ae45d206-785e-4662-9efc-4b0987941483\") " pod="calico-system/csi-node-driver-wm244" Jan 22 00:35:05.277354 kubelet[2805]: E0122 00:35:05.276990 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.277354 kubelet[2805]: W0122 00:35:05.277217 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.277354 kubelet[2805]: E0122 00:35:05.277236 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
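The dns.go:154 entry in the same burst is unrelated to FlexVolume: kubelet applies at most three nameservers to a pod's resolv.conf (matching the classic glibc limit) and logs the trimmed line when the node lists more. A small sketch of that trimming, assuming the standard /etc/resolv.conf location; the cap of three is kubelet's documented limit, the constant name here is illustrative:

    // resolv_trim.go: reproduces the "Nameserver limits exceeded" trimming.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // kubelet keeps at most three nameservers per pod

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            dropped := len(servers) - maxNameservers
            servers = servers[:maxNameservers]
            fmt.Printf("omitting %d nameserver(s); applied line: %s\n",
                dropped, strings.Join(servers, " "))
            return
        }
        fmt.Println("applied nameserver line:", strings.Join(servers, " "))
    }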
Error: unexpected end of JSON input" Jan 22 00:35:05.280084 containerd[1629]: time="2026-01-22T00:35:05.279614441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-48nss,Uid:e4e7c73c-c6fc-4dab-baa7-4b921b36faa2,Namespace:calico-system,Attempt:0,}" Jan 22 00:35:05.280145 kubelet[2805]: E0122 00:35:05.280042 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.280145 kubelet[2805]: W0122 00:35:05.280054 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.280388 kubelet[2805]: E0122 00:35:05.280248 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.280833 kubelet[2805]: E0122 00:35:05.280819 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.281209 kubelet[2805]: W0122 00:35:05.281166 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.281209 kubelet[2805]: E0122 00:35:05.281189 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.282014 kubelet[2805]: E0122 00:35:05.281979 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.282771 kubelet[2805]: W0122 00:35:05.282751 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.283000 kubelet[2805]: E0122 00:35:05.282932 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.285027 kubelet[2805]: E0122 00:35:05.284789 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.285027 kubelet[2805]: W0122 00:35:05.284817 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.285027 kubelet[2805]: E0122 00:35:05.284833 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:35:05.286240 kubelet[2805]: E0122 00:35:05.286171 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.286240 kubelet[2805]: W0122 00:35:05.286210 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.286240 kubelet[2805]: E0122 00:35:05.286224 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.286914 kubelet[2805]: E0122 00:35:05.286790 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.286914 kubelet[2805]: W0122 00:35:05.286835 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.286914 kubelet[2805]: E0122 00:35:05.286850 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.287474 kubelet[2805]: E0122 00:35:05.287209 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.287474 kubelet[2805]: W0122 00:35:05.287272 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.287474 kubelet[2805]: E0122 00:35:05.287289 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.288566 kubelet[2805]: E0122 00:35:05.287905 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.288566 kubelet[2805]: W0122 00:35:05.287946 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.288566 kubelet[2805]: E0122 00:35:05.287958 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.288566 kubelet[2805]: E0122 00:35:05.288290 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.288566 kubelet[2805]: W0122 00:35:05.288300 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.288566 kubelet[2805]: E0122 00:35:05.288312 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:35:05.307028 systemd[1]: Started cri-containerd-f30501bfb2b078f4ca24da8f33a293bcb0a1627da463e9bb2965dbd3d6c4a855.scope - libcontainer container f30501bfb2b078f4ca24da8f33a293bcb0a1627da463e9bb2965dbd3d6c4a855. Jan 22 00:35:05.349000 audit: BPF prog-id=163 op=LOAD Jan 22 00:35:05.353559 kernel: audit: type=1334 audit(1769042105.349:548): prog-id=163 op=LOAD Jan 22 00:35:05.359849 kernel: audit: type=1334 audit(1769042105.351:549): prog-id=164 op=LOAD Jan 22 00:35:05.351000 audit: BPF prog-id=164 op=LOAD Jan 22 00:35:05.360581 containerd[1629]: time="2026-01-22T00:35:05.360463728Z" level=info msg="connecting to shim 9bf6089003235d11bb9c818e5c17096a2aae0f10d61accc37fb5f497ab8d47bf" address="unix:///run/containerd/s/147a7a2f11ec64a1bd06d2a67bad47ca3ef29b2047e6385a9bc92fb814442eeb" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:35:05.378592 kernel: audit: type=1300 audit(1769042105.351:549): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3266 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:05.351000 audit[3292]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3266 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:05.379288 kubelet[2805]: E0122 00:35:05.378187 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.379288 kubelet[2805]: W0122 00:35:05.378218 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.379288 kubelet[2805]: E0122 00:35:05.378246 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.379687 kubelet[2805]: E0122 00:35:05.379667 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.379687 kubelet[2805]: W0122 00:35:05.379685 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.379796 kubelet[2805]: E0122 00:35:05.379702 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.381674 kubelet[2805]: E0122 00:35:05.381647 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.381674 kubelet[2805]: W0122 00:35:05.381667 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.381793 kubelet[2805]: E0122 00:35:05.381683 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
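The containerd "connecting to shim ... protocol=ttrpc version=3" lines and the systemd cri-containerd-<id>.scope unit above show the calico-typha sandbox being handed to a runtime v2 shim in containerd's k8s.io namespace. The ttrpc endpoints under /run/containerd/s/ are internal, but the same state can be inspected with the public containerd Go client; a sketch, assuming the default socket path and the 1.x import paths:

    // k8s_ns_list.go: lists containers in containerd's "k8s.io" namespace,
    // the namespace the shim connections above belong to.
    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Kubelet-managed sandboxes and containers live under "k8s.io".
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        containers, err := client.Containers(ctx)
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range containers {
            fmt.Println(c.ID())
        }
    }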
Error: unexpected end of JSON input" Jan 22 00:35:05.382101 kubelet[2805]: E0122 00:35:05.382049 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.382101 kubelet[2805]: W0122 00:35:05.382062 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.382101 kubelet[2805]: E0122 00:35:05.382077 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.382846 kubelet[2805]: E0122 00:35:05.382829 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.382846 kubelet[2805]: W0122 00:35:05.382842 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.382946 kubelet[2805]: E0122 00:35:05.382856 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.384608 kubelet[2805]: E0122 00:35:05.384583 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.384608 kubelet[2805]: W0122 00:35:05.384601 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.384695 kubelet[2805]: E0122 00:35:05.384616 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.384901 kubelet[2805]: E0122 00:35:05.384877 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.384901 kubelet[2805]: W0122 00:35:05.384896 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.384996 kubelet[2805]: E0122 00:35:05.384909 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:35:05.351000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633303530316266623262303738663463613234646138663333613239 Jan 22 00:35:05.395937 kubelet[2805]: E0122 00:35:05.385663 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.395937 kubelet[2805]: W0122 00:35:05.385678 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.395937 kubelet[2805]: E0122 00:35:05.385697 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.395937 kubelet[2805]: E0122 00:35:05.386627 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.395937 kubelet[2805]: W0122 00:35:05.386639 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.395937 kubelet[2805]: E0122 00:35:05.386651 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.395937 kubelet[2805]: E0122 00:35:05.388001 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.395937 kubelet[2805]: W0122 00:35:05.388014 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.395937 kubelet[2805]: E0122 00:35:05.388032 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.395937 kubelet[2805]: E0122 00:35:05.389877 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.396266 kubelet[2805]: W0122 00:35:05.389891 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.396266 kubelet[2805]: E0122 00:35:05.389908 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:35:05.396266 kubelet[2805]: E0122 00:35:05.390483 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.396266 kubelet[2805]: W0122 00:35:05.390498 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.396266 kubelet[2805]: E0122 00:35:05.390762 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.396266 kubelet[2805]: E0122 00:35:05.391686 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.396266 kubelet[2805]: W0122 00:35:05.391700 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.396266 kubelet[2805]: E0122 00:35:05.391717 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.396266 kubelet[2805]: E0122 00:35:05.392587 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.396266 kubelet[2805]: W0122 00:35:05.392600 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.396595 kernel: audit: type=1327 audit(1769042105.351:549): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633303530316266623262303738663463613234646138663333613239 Jan 22 00:35:05.396646 kubelet[2805]: E0122 00:35:05.392617 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.396646 kubelet[2805]: E0122 00:35:05.394182 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.396646 kubelet[2805]: W0122 00:35:05.394196 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.396646 kubelet[2805]: E0122 00:35:05.394213 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
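For completeness, this is the layout the FlexVolume prober is scanning when it emits the errors above: each subdirectory of the exec plugin directory is named <vendor>~<driver> and must contain an executable named <driver>. A sketch that checks the same layout on this host (the path is copied from the log, everything else is illustrative):

    // flexprobe.go: walks the FlexVolume plugin directory the way kubelet's
    // prober does and reports drivers whose executable is missing, e.g. the
    // nodeagent~uds/uds binary behind the repeated errors above.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        pluginDir := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec"
        entries, err := os.ReadDir(pluginDir)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        for _, e := range entries {
            if !e.IsDir() || !strings.Contains(e.Name(), "~") {
                continue
            }
            driver := e.Name()[strings.Index(e.Name(), "~")+1:]
            exe := filepath.Join(pluginDir, e.Name(), driver)
            info, err := os.Stat(exe)
            if err != nil || info.Mode()&0o111 == 0 {
                fmt.Printf("plugin %s: missing or non-executable driver %s\n", e.Name(), exe)
                continue
            }
            fmt.Printf("plugin %s: driver %s present\n", e.Name(), exe)
        }
    }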
Error: unexpected end of JSON input" Jan 22 00:35:05.396646 kubelet[2805]: E0122 00:35:05.395265 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.396646 kubelet[2805]: W0122 00:35:05.395294 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.396646 kubelet[2805]: E0122 00:35:05.395311 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.396646 kubelet[2805]: E0122 00:35:05.396184 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.396646 kubelet[2805]: W0122 00:35:05.396195 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.396646 kubelet[2805]: E0122 00:35:05.396207 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.397524 kubelet[2805]: E0122 00:35:05.397081 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.397524 kubelet[2805]: W0122 00:35:05.397094 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.397524 kubelet[2805]: E0122 00:35:05.397107 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:35:05.352000 audit: BPF prog-id=164 op=UNLOAD Jan 22 00:35:05.352000 audit[3292]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3266 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:05.352000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633303530316266623262303738663463613234646138663333613239 Jan 22 00:35:05.354000 audit: BPF prog-id=165 op=LOAD Jan 22 00:35:05.354000 audit[3292]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3266 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:05.354000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633303530316266623262303738663463613234646138663333613239 Jan 22 00:35:05.354000 audit: BPF prog-id=166 op=LOAD Jan 22 00:35:05.354000 audit[3292]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3266 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:05.399207 kubelet[2805]: E0122 00:35:05.399186 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.399207 kubelet[2805]: W0122 00:35:05.399198 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.399281 kubelet[2805]: E0122 00:35:05.399211 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:35:05.354000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633303530316266623262303738663463613234646138663333613239 Jan 22 00:35:05.354000 audit: BPF prog-id=166 op=UNLOAD Jan 22 00:35:05.354000 audit[3292]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3266 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:05.354000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633303530316266623262303738663463613234646138663333613239 Jan 22 00:35:05.354000 audit: BPF prog-id=165 op=UNLOAD Jan 22 00:35:05.354000 audit[3292]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3266 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:05.354000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633303530316266623262303738663463613234646138663333613239 Jan 22 00:35:05.355000 audit: BPF prog-id=167 op=LOAD Jan 22 00:35:05.355000 audit[3292]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3266 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:05.355000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633303530316266623262303738663463613234646138663333613239 Jan 22 00:35:05.400194 kubelet[2805]: E0122 00:35:05.400132 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.400194 kubelet[2805]: W0122 00:35:05.400143 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.400194 kubelet[2805]: E0122 00:35:05.400156 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:35:05.402540 kubelet[2805]: E0122 00:35:05.401640 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.402540 kubelet[2805]: W0122 00:35:05.401655 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.402540 kubelet[2805]: E0122 00:35:05.401667 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.402540 kubelet[2805]: E0122 00:35:05.401981 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.402540 kubelet[2805]: W0122 00:35:05.401991 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.402540 kubelet[2805]: E0122 00:35:05.402005 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.402769 kubelet[2805]: E0122 00:35:05.402593 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.402769 kubelet[2805]: W0122 00:35:05.402603 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.402769 kubelet[2805]: E0122 00:35:05.402616 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.404130 kubelet[2805]: E0122 00:35:05.403992 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.404130 kubelet[2805]: W0122 00:35:05.404013 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.404130 kubelet[2805]: E0122 00:35:05.404028 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.405640 kubelet[2805]: E0122 00:35:05.405622 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.405760 kubelet[2805]: W0122 00:35:05.405717 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.405986 kubelet[2805]: E0122 00:35:05.405825 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:35:05.436166 systemd[1]: Started cri-containerd-9bf6089003235d11bb9c818e5c17096a2aae0f10d61accc37fb5f497ab8d47bf.scope - libcontainer container 9bf6089003235d11bb9c818e5c17096a2aae0f10d61accc37fb5f497ab8d47bf. Jan 22 00:35:05.438241 kubelet[2805]: E0122 00:35:05.438207 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:05.438241 kubelet[2805]: W0122 00:35:05.438224 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:05.438371 kubelet[2805]: E0122 00:35:05.438245 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:05.463000 audit: BPF prog-id=168 op=LOAD Jan 22 00:35:05.475000 audit: BPF prog-id=169 op=LOAD Jan 22 00:35:05.475000 audit[3370]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=3340 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:05.475000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962663630383930303332333564313162623963383138653563313730 Jan 22 00:35:05.476000 audit: BPF prog-id=169 op=UNLOAD Jan 22 00:35:05.476000 audit[3370]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3340 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:05.476000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962663630383930303332333564313162623963383138653563313730 Jan 22 00:35:05.477000 audit: BPF prog-id=170 op=LOAD Jan 22 00:35:05.477000 audit[3370]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=3340 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:05.477000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962663630383930303332333564313162623963383138653563313730 Jan 22 00:35:05.477000 audit: BPF prog-id=171 op=LOAD Jan 22 00:35:05.477000 audit[3370]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=3340 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:05.477000 audit: PROCTITLE 
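Each "Started cri-containerd-<id>.scope - libcontainer container <id>" line is systemd creating a transient scope unit for a container on containerd's behalf. A sketch that lists those scopes over the systemd D-Bus API, assuming the go-systemd v22 bindings and their context-aware helpers:

    // cri_scopes.go: lists the transient cri-containerd-*.scope units that
    // systemd reports for running containers, as seen in the messages above.
    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/coreos/go-systemd/v22/dbus"
    )

    func main() {
        ctx := context.Background()
        conn, err := dbus.NewSystemdConnectionContext(ctx)
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        units, err := conn.ListUnitsByPatternsContext(ctx, nil, []string{"cri-containerd-*.scope"})
        if err != nil {
            log.Fatal(err)
        }
        for _, u := range units {
            fmt.Printf("%s %s/%s\n", u.Name, u.ActiveState, u.SubState)
        }
    }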
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962663630383930303332333564313162623963383138653563313730 Jan 22 00:35:05.477000 audit: BPF prog-id=171 op=UNLOAD Jan 22 00:35:05.477000 audit[3370]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3340 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:05.477000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962663630383930303332333564313162623963383138653563313730 Jan 22 00:35:05.477000 audit: BPF prog-id=170 op=UNLOAD Jan 22 00:35:05.477000 audit[3370]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3340 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:05.477000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962663630383930303332333564313162623963383138653563313730 Jan 22 00:35:05.477000 audit: BPF prog-id=172 op=LOAD Jan 22 00:35:05.477000 audit[3370]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=3340 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:05.477000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962663630383930303332333564313162623963383138653563313730 Jan 22 00:35:05.482942 containerd[1629]: time="2026-01-22T00:35:05.482843504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55d4f8f45b-fcxw7,Uid:1a930ab4-6d50-412b-9229-00090f78cd9f,Namespace:calico-system,Attempt:0,} returns sandbox id \"f30501bfb2b078f4ca24da8f33a293bcb0a1627da463e9bb2965dbd3d6c4a855\"" Jan 22 00:35:05.486342 kubelet[2805]: E0122 00:35:05.486313 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:35:05.489177 containerd[1629]: time="2026-01-22T00:35:05.489107987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 22 00:35:05.538330 containerd[1629]: time="2026-01-22T00:35:05.538287286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-48nss,Uid:e4e7c73c-c6fc-4dab-baa7-4b921b36faa2,Namespace:calico-system,Attempt:0,} returns sandbox id \"9bf6089003235d11bb9c818e5c17096a2aae0f10d61accc37fb5f497ab8d47bf\"" Jan 22 00:35:05.539955 kubelet[2805]: E0122 00:35:05.539894 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:35:05.838000 audit[3412]: NETFILTER_CFG table=filter:115 family=2 entries=22 op=nft_register_rule pid=3412 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:35:05.838000 audit[3412]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd26615740 a2=0 a3=7ffd2661572c items=0 ppid=2914 pid=3412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:05.838000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:35:05.842000 audit[3412]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3412 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:35:05.842000 audit[3412]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd26615740 a2=0 a3=0 items=0 ppid=2914 pid=3412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:05.842000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:35:06.236474 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4193669544.mount: Deactivated successfully. Jan 22 00:35:06.421274 kubelet[2805]: E0122 00:35:06.420964 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wm244" podUID="ae45d206-785e-4662-9efc-4b0987941483" Jan 22 00:35:06.998561 containerd[1629]: time="2026-01-22T00:35:06.998461886Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:35:06.999949 containerd[1629]: time="2026-01-22T00:35:06.999553262Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 22 00:35:07.000566 containerd[1629]: time="2026-01-22T00:35:07.000534069Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:35:07.003591 containerd[1629]: time="2026-01-22T00:35:07.003559262Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:35:07.004527 containerd[1629]: time="2026-01-22T00:35:07.004483252Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.515338809s" Jan 22 00:35:07.004642 containerd[1629]: time="2026-01-22T00:35:07.004621415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 22 
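The audit PROCTITLE fields are hex-encoded, NUL-separated argv strings. Decoding the value attached to the NETFILTER_CFG records above yields iptables-restore -w 5 --noflush --counters, i.e. a rule refresh pushed through iptables-restore (most likely by kube-proxy, though the records only show the parent pid). A short decoder, with the hex copied from those records:

    // proctitle_decode.go: decodes an audit proctitle= value into its argv.
    package main

    import (
        "encoding/hex"
        "fmt"
        "log"
        "strings"
    )

    func main() {
        // Value taken verbatim from the NETFILTER_CFG audit records above.
        const proctitle = "69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273"
        raw, err := hex.DecodeString(proctitle)
        if err != nil {
            log.Fatal(err)
        }
        // Arguments are separated by NUL bytes, exactly as in /proc/<pid>/cmdline.
        fmt.Println(strings.Join(strings.Split(string(raw), "\x00"), " "))
    }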
00:35:07.006721 containerd[1629]: time="2026-01-22T00:35:07.006694232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 22 00:35:07.033644 containerd[1629]: time="2026-01-22T00:35:07.033463890Z" level=info msg="CreateContainer within sandbox \"f30501bfb2b078f4ca24da8f33a293bcb0a1627da463e9bb2965dbd3d6c4a855\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 22 00:35:07.041058 containerd[1629]: time="2026-01-22T00:35:07.040559975Z" level=info msg="Container 3c295836945d803bb101a6f74d32c424996551c52b4cb1c9a2efec4652ac9359: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:35:07.046399 containerd[1629]: time="2026-01-22T00:35:07.046371831Z" level=info msg="CreateContainer within sandbox \"f30501bfb2b078f4ca24da8f33a293bcb0a1627da463e9bb2965dbd3d6c4a855\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3c295836945d803bb101a6f74d32c424996551c52b4cb1c9a2efec4652ac9359\"" Jan 22 00:35:07.047684 containerd[1629]: time="2026-01-22T00:35:07.047646199Z" level=info msg="StartContainer for \"3c295836945d803bb101a6f74d32c424996551c52b4cb1c9a2efec4652ac9359\"" Jan 22 00:35:07.049984 containerd[1629]: time="2026-01-22T00:35:07.049960195Z" level=info msg="connecting to shim 3c295836945d803bb101a6f74d32c424996551c52b4cb1c9a2efec4652ac9359" address="unix:///run/containerd/s/c40dd84e12ac20bc35471a930b4b3948629650c3b446e62baaf74f438f1ad257" protocol=ttrpc version=3 Jan 22 00:35:07.085099 systemd[1]: Started cri-containerd-3c295836945d803bb101a6f74d32c424996551c52b4cb1c9a2efec4652ac9359.scope - libcontainer container 3c295836945d803bb101a6f74d32c424996551c52b4cb1c9a2efec4652ac9359. Jan 22 00:35:07.143000 audit: BPF prog-id=173 op=LOAD Jan 22 00:35:07.144000 audit: BPF prog-id=174 op=LOAD Jan 22 00:35:07.144000 audit[3423]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3266 pid=3423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:07.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363323935383336393435643830336262313031613666373464333263 Jan 22 00:35:07.144000 audit: BPF prog-id=174 op=UNLOAD Jan 22 00:35:07.144000 audit[3423]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3266 pid=3423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:07.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363323935383336393435643830336262313031613666373464333263 Jan 22 00:35:07.145000 audit: BPF prog-id=175 op=LOAD Jan 22 00:35:07.145000 audit[3423]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3266 pid=3423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:07.145000 audit: PROCTITLE 
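The PullImage and "Pulled image" messages above are kubelet driving containerd's CRI image service: ghcr.io/flatcar/calico/typha:v3.30.4 resolves to the listed digest and is fetched in roughly 1.5 s before CreateContainer and StartContainer run in the typha sandbox. A sketch of the same pull issued directly against the CRI endpoint, assuming the default containerd socket and the k8s.io/cri-api types:

    // cri_pull.go: pulls the calico/typha image through containerd's CRI
    // ImageService, the same API behind the PullImage log lines above.
    package main

    import (
        "context"
        "fmt"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        images := runtimeapi.NewImageServiceClient(conn)
        resp, err := images.PullImage(context.Background(), &runtimeapi.PullImageRequest{
            Image: &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/typha:v3.30.4"},
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled:", resp.ImageRef)
    }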
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363323935383336393435643830336262313031613666373464333263 Jan 22 00:35:07.145000 audit: BPF prog-id=176 op=LOAD Jan 22 00:35:07.145000 audit[3423]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3266 pid=3423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:07.145000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363323935383336393435643830336262313031613666373464333263 Jan 22 00:35:07.145000 audit: BPF prog-id=176 op=UNLOAD Jan 22 00:35:07.145000 audit[3423]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3266 pid=3423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:07.145000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363323935383336393435643830336262313031613666373464333263 Jan 22 00:35:07.145000 audit: BPF prog-id=175 op=UNLOAD Jan 22 00:35:07.145000 audit[3423]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3266 pid=3423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:07.145000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363323935383336393435643830336262313031613666373464333263 Jan 22 00:35:07.145000 audit: BPF prog-id=177 op=LOAD Jan 22 00:35:07.145000 audit[3423]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3266 pid=3423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:07.145000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363323935383336393435643830336262313031613666373464333263 Jan 22 00:35:07.206041 containerd[1629]: time="2026-01-22T00:35:07.206002025Z" level=info msg="StartContainer for \"3c295836945d803bb101a6f74d32c424996551c52b4cb1c9a2efec4652ac9359\" returns successfully" Jan 22 00:35:07.509391 kubelet[2805]: E0122 00:35:07.509360 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:35:07.589533 kubelet[2805]: E0122 00:35:07.589478 2805 driver-call.go:262] Failed to unmarshal output 
for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:07.589533 kubelet[2805]: W0122 00:35:07.589524 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:07.590217 kubelet[2805]: E0122 00:35:07.589548 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:07.590217 kubelet[2805]: E0122 00:35:07.589966 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:07.590217 kubelet[2805]: W0122 00:35:07.589978 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:07.590217 kubelet[2805]: E0122 00:35:07.589992 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:07.591309 kubelet[2805]: E0122 00:35:07.590264 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:07.591309 kubelet[2805]: W0122 00:35:07.590274 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:07.591309 kubelet[2805]: E0122 00:35:07.590285 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:07.591309 kubelet[2805]: E0122 00:35:07.590799 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:07.591309 kubelet[2805]: W0122 00:35:07.590809 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:07.591309 kubelet[2805]: E0122 00:35:07.590820 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:07.591701 kubelet[2805]: E0122 00:35:07.591358 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:07.591701 kubelet[2805]: W0122 00:35:07.591368 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:07.591701 kubelet[2805]: E0122 00:35:07.591380 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:35:07.592065 kubelet[2805]: E0122 00:35:07.591724 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:07.592065 kubelet[2805]: W0122 00:35:07.591735 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:07.592065 kubelet[2805]: E0122 00:35:07.591747 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:07.592405 kubelet[2805]: E0122 00:35:07.592388 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:07.592405 kubelet[2805]: W0122 00:35:07.592402 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:07.592602 kubelet[2805]: E0122 00:35:07.592414 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:07.593028 kubelet[2805]: E0122 00:35:07.593004 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:07.593028 kubelet[2805]: W0122 00:35:07.593023 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:07.593151 kubelet[2805]: E0122 00:35:07.593036 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:07.593482 kubelet[2805]: E0122 00:35:07.593435 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:07.593482 kubelet[2805]: W0122 00:35:07.593453 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:07.593482 kubelet[2805]: E0122 00:35:07.593465 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:07.593931 kubelet[2805]: E0122 00:35:07.593787 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:07.593931 kubelet[2805]: W0122 00:35:07.593806 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:07.594490 kubelet[2805]: E0122 00:35:07.593818 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:35:07.595263 kubelet[2805]: E0122 00:35:07.595239 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:07.595263 kubelet[2805]: W0122 00:35:07.595258 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:07.595263 kubelet[2805]: E0122 00:35:07.595270 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:07.595777 kubelet[2805]: E0122 00:35:07.595755 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:07.595777 kubelet[2805]: W0122 00:35:07.595772 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:07.595885 kubelet[2805]: E0122 00:35:07.595784 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:07.596205 kubelet[2805]: E0122 00:35:07.596180 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:07.596205 kubelet[2805]: W0122 00:35:07.596198 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:07.596314 kubelet[2805]: E0122 00:35:07.596210 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:07.596792 kubelet[2805]: E0122 00:35:07.596770 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:07.596792 kubelet[2805]: W0122 00:35:07.596788 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:07.596896 kubelet[2805]: E0122 00:35:07.596801 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:07.597252 kubelet[2805]: E0122 00:35:07.597236 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:07.597252 kubelet[2805]: W0122 00:35:07.597250 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:07.597447 kubelet[2805]: E0122 00:35:07.597262 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:35:07.610855 kubelet[2805]: E0122 00:35:07.610818 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:07.610855 kubelet[2805]: W0122 00:35:07.610853 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:07.610951 kubelet[2805]: E0122 00:35:07.610869 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:07.611277 kubelet[2805]: E0122 00:35:07.611245 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:07.611277 kubelet[2805]: W0122 00:35:07.611275 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:07.611361 kubelet[2805]: E0122 00:35:07.611288 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:07.612119 kubelet[2805]: E0122 00:35:07.611699 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:07.612119 kubelet[2805]: W0122 00:35:07.611729 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:07.612119 kubelet[2805]: E0122 00:35:07.611741 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:07.612119 kubelet[2805]: E0122 00:35:07.612070 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:07.612119 kubelet[2805]: W0122 00:35:07.612080 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:07.612119 kubelet[2805]: E0122 00:35:07.612092 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:07.612471 kubelet[2805]: E0122 00:35:07.612453 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:07.612471 kubelet[2805]: W0122 00:35:07.612469 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:07.612558 kubelet[2805]: E0122 00:35:07.612481 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:35:07.612958 kubelet[2805]: E0122 00:35:07.612940 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:07.613025 kubelet[2805]: W0122 00:35:07.612956 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:07.613025 kubelet[2805]: E0122 00:35:07.613018 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:07.613606 kubelet[2805]: E0122 00:35:07.613565 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:07.613606 kubelet[2805]: W0122 00:35:07.613584 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:07.613606 kubelet[2805]: E0122 00:35:07.613595 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:07.614071 kubelet[2805]: E0122 00:35:07.614044 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:07.614116 kubelet[2805]: W0122 00:35:07.614062 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:07.614116 kubelet[2805]: E0122 00:35:07.614101 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:07.614494 kubelet[2805]: E0122 00:35:07.614465 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:07.614494 kubelet[2805]: W0122 00:35:07.614481 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:07.614494 kubelet[2805]: E0122 00:35:07.614493 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:07.614879 kubelet[2805]: E0122 00:35:07.614861 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:07.614879 kubelet[2805]: W0122 00:35:07.614877 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:07.615142 kubelet[2805]: E0122 00:35:07.614919 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:35:07.615575 kubelet[2805]: E0122 00:35:07.615552 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:07.615575 kubelet[2805]: W0122 00:35:07.615568 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:07.615801 kubelet[2805]: E0122 00:35:07.615579 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:07.616330 kubelet[2805]: E0122 00:35:07.616275 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:07.616330 kubelet[2805]: W0122 00:35:07.616292 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:07.616330 kubelet[2805]: E0122 00:35:07.616305 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:07.617394 kubelet[2805]: E0122 00:35:07.617366 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:07.617394 kubelet[2805]: W0122 00:35:07.617383 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:07.617484 kubelet[2805]: E0122 00:35:07.617395 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:07.618189 kubelet[2805]: E0122 00:35:07.618169 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:07.618189 kubelet[2805]: W0122 00:35:07.618186 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:07.618278 kubelet[2805]: E0122 00:35:07.618199 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:07.618667 kubelet[2805]: E0122 00:35:07.618647 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:07.618667 kubelet[2805]: W0122 00:35:07.618660 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:07.618667 kubelet[2805]: E0122 00:35:07.618679 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:35:07.619292 kubelet[2805]: E0122 00:35:07.619271 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:07.619292 kubelet[2805]: W0122 00:35:07.619288 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:07.619371 kubelet[2805]: E0122 00:35:07.619299 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:07.619902 kubelet[2805]: E0122 00:35:07.619881 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:07.619902 kubelet[2805]: W0122 00:35:07.619899 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:07.619982 kubelet[2805]: E0122 00:35:07.619911 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:35:07.620701 kubelet[2805]: E0122 00:35:07.620685 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:35:07.620758 kubelet[2805]: W0122 00:35:07.620701 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:35:07.620758 kubelet[2805]: E0122 00:35:07.620713 2805 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:35:07.785151 containerd[1629]: time="2026-01-22T00:35:07.784450223Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:35:07.786979 containerd[1629]: time="2026-01-22T00:35:07.785937805Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 22 00:35:07.787587 containerd[1629]: time="2026-01-22T00:35:07.787561249Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:35:07.790560 containerd[1629]: time="2026-01-22T00:35:07.790528812Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:35:07.791711 containerd[1629]: time="2026-01-22T00:35:07.791682920Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 784.951332ms" Jan 22 00:35:07.791813 containerd[1629]: time="2026-01-22T00:35:07.791792358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 22 00:35:07.796953 containerd[1629]: time="2026-01-22T00:35:07.796925673Z" level=info msg="CreateContainer within sandbox \"9bf6089003235d11bb9c818e5c17096a2aae0f10d61accc37fb5f497ab8d47bf\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 22 00:35:07.810536 containerd[1629]: time="2026-01-22T00:35:07.808687618Z" level=info msg="Container b6648b9e8d4e4ab6e4a3c0646c0059ec6d7dc1bcc64617e0a37d7065287dad21: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:35:07.815593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4242342977.mount: Deactivated successfully. Jan 22 00:35:07.824432 containerd[1629]: time="2026-01-22T00:35:07.824397735Z" level=info msg="CreateContainer within sandbox \"9bf6089003235d11bb9c818e5c17096a2aae0f10d61accc37fb5f497ab8d47bf\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b6648b9e8d4e4ab6e4a3c0646c0059ec6d7dc1bcc64617e0a37d7065287dad21\"" Jan 22 00:35:07.827057 containerd[1629]: time="2026-01-22T00:35:07.827027503Z" level=info msg="StartContainer for \"b6648b9e8d4e4ab6e4a3c0646c0059ec6d7dc1bcc64617e0a37d7065287dad21\"" Jan 22 00:35:07.830981 containerd[1629]: time="2026-01-22T00:35:07.830944391Z" level=info msg="connecting to shim b6648b9e8d4e4ab6e4a3c0646c0059ec6d7dc1bcc64617e0a37d7065287dad21" address="unix:///run/containerd/s/147a7a2f11ec64a1bd06d2a67bad47ca3ef29b2047e6385a9bc92fb814442eeb" protocol=ttrpc version=3 Jan 22 00:35:07.866242 systemd[1]: Started cri-containerd-b6648b9e8d4e4ab6e4a3c0646c0059ec6d7dc1bcc64617e0a37d7065287dad21.scope - libcontainer container b6648b9e8d4e4ab6e4a3c0646c0059ec6d7dc1bcc64617e0a37d7065287dad21. 
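Editor's note: the long run of kubelet errors above ("FlexVolume: driver call failed ... executable file not found", followed by "unexpected end of JSON input") is the FlexVolume prober repeatedly invoking /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the init argument before the Calico flexvol-driver container (pulled and created just above) has installed that binary, so each call returns empty output that cannot be parsed as JSON. A minimal sketch of the check that is failing, assuming only the path shown in the log:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Driver path taken verbatim from the kubelet errors above.
	const driver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

	info, err := os.Stat(driver)
	if err != nil {
		// State the log shows: the probe runs before the flexvol-driver
		// container has copied the binary in, so "init" produces no output
		// and the JSON unmarshal fails.
		fmt.Printf("probe would fail: %v\n", err)
		return
	}
	if info.Mode()&0o111 == 0 {
		fmt.Println("probe would fail: driver present but not executable")
		return
	}
	fmt.Println("driver installed; the FlexVolume init call should now return JSON")
}
```

Once the flexvol-driver container started above finishes copying the binary into place, these probe errors are expected to stop on the next filesystem event in the plugin directory.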
Jan 22 00:35:07.929000 audit: BPF prog-id=178 op=LOAD Jan 22 00:35:07.929000 audit[3498]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3340 pid=3498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:07.929000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236363438623965386434653461623665346133633036343663303035 Jan 22 00:35:07.930000 audit: BPF prog-id=179 op=LOAD Jan 22 00:35:07.930000 audit[3498]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3340 pid=3498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:07.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236363438623965386434653461623665346133633036343663303035 Jan 22 00:35:07.930000 audit: BPF prog-id=179 op=UNLOAD Jan 22 00:35:07.930000 audit[3498]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3340 pid=3498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:07.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236363438623965386434653461623665346133633036343663303035 Jan 22 00:35:07.930000 audit: BPF prog-id=178 op=UNLOAD Jan 22 00:35:07.930000 audit[3498]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3340 pid=3498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:07.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236363438623965386434653461623665346133633036343663303035 Jan 22 00:35:07.930000 audit: BPF prog-id=180 op=LOAD Jan 22 00:35:07.930000 audit[3498]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3340 pid=3498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:07.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236363438623965386434653461623665346133633036343663303035 Jan 22 00:35:07.954094 containerd[1629]: time="2026-01-22T00:35:07.954058221Z" level=info msg="StartContainer for 
\"b6648b9e8d4e4ab6e4a3c0646c0059ec6d7dc1bcc64617e0a37d7065287dad21\" returns successfully" Jan 22 00:35:07.976766 systemd[1]: cri-containerd-b6648b9e8d4e4ab6e4a3c0646c0059ec6d7dc1bcc64617e0a37d7065287dad21.scope: Deactivated successfully. Jan 22 00:35:07.981000 audit: BPF prog-id=180 op=UNLOAD Jan 22 00:35:07.982190 containerd[1629]: time="2026-01-22T00:35:07.982154315Z" level=info msg="received container exit event container_id:\"b6648b9e8d4e4ab6e4a3c0646c0059ec6d7dc1bcc64617e0a37d7065287dad21\" id:\"b6648b9e8d4e4ab6e4a3c0646c0059ec6d7dc1bcc64617e0a37d7065287dad21\" pid:3510 exited_at:{seconds:1769042107 nanos:981354524}" Jan 22 00:35:08.019636 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6648b9e8d4e4ab6e4a3c0646c0059ec6d7dc1bcc64617e0a37d7065287dad21-rootfs.mount: Deactivated successfully. Jan 22 00:35:08.421406 kubelet[2805]: E0122 00:35:08.421349 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wm244" podUID="ae45d206-785e-4662-9efc-4b0987941483" Jan 22 00:35:08.514811 kubelet[2805]: I0122 00:35:08.514647 2805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 00:35:08.515343 kubelet[2805]: E0122 00:35:08.514990 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:35:08.515964 kubelet[2805]: E0122 00:35:08.515647 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:35:08.517748 containerd[1629]: time="2026-01-22T00:35:08.516962663Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 22 00:35:08.538559 kubelet[2805]: I0122 00:35:08.538453 2805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-55d4f8f45b-fcxw7" podStartSLOduration=3.021056495 podStartE2EDuration="4.538436068s" podCreationTimestamp="2026-01-22 00:35:04 +0000 UTC" firstStartedPulling="2026-01-22 00:35:05.48839731 +0000 UTC m=+20.210295506" lastFinishedPulling="2026-01-22 00:35:07.005776893 +0000 UTC m=+21.727675079" observedRunningTime="2026-01-22 00:35:07.532819633 +0000 UTC m=+22.254717839" watchObservedRunningTime="2026-01-22 00:35:08.538436068 +0000 UTC m=+23.260334254" Jan 22 00:35:10.422574 kubelet[2805]: E0122 00:35:10.421105 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wm244" podUID="ae45d206-785e-4662-9efc-4b0987941483" Jan 22 00:35:10.949958 containerd[1629]: time="2026-01-22T00:35:10.949916202Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:35:10.950858 containerd[1629]: time="2026-01-22T00:35:10.950831233Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 22 00:35:10.951388 containerd[1629]: time="2026-01-22T00:35:10.951350146Z" level=info msg="ImageCreate event 
name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:35:10.953397 containerd[1629]: time="2026-01-22T00:35:10.953356830Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:35:10.954569 containerd[1629]: time="2026-01-22T00:35:10.954436373Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.43727117s" Jan 22 00:35:10.954569 containerd[1629]: time="2026-01-22T00:35:10.954462227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 22 00:35:10.959777 containerd[1629]: time="2026-01-22T00:35:10.959741734Z" level=info msg="CreateContainer within sandbox \"9bf6089003235d11bb9c818e5c17096a2aae0f10d61accc37fb5f497ab8d47bf\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 22 00:35:10.972539 containerd[1629]: time="2026-01-22T00:35:10.972398267Z" level=info msg="Container a21b3d167934a1e08b6ecdaba1367277a3ee9a4bc979d1a5999caac244b35dd3: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:35:10.978134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount676894174.mount: Deactivated successfully. Jan 22 00:35:10.984308 containerd[1629]: time="2026-01-22T00:35:10.984223011Z" level=info msg="CreateContainer within sandbox \"9bf6089003235d11bb9c818e5c17096a2aae0f10d61accc37fb5f497ab8d47bf\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a21b3d167934a1e08b6ecdaba1367277a3ee9a4bc979d1a5999caac244b35dd3\"" Jan 22 00:35:10.986186 containerd[1629]: time="2026-01-22T00:35:10.985108117Z" level=info msg="StartContainer for \"a21b3d167934a1e08b6ecdaba1367277a3ee9a4bc979d1a5999caac244b35dd3\"" Jan 22 00:35:10.987865 containerd[1629]: time="2026-01-22T00:35:10.987845305Z" level=info msg="connecting to shim a21b3d167934a1e08b6ecdaba1367277a3ee9a4bc979d1a5999caac244b35dd3" address="unix:///run/containerd/s/147a7a2f11ec64a1bd06d2a67bad47ca3ef29b2047e6385a9bc92fb814442eeb" protocol=ttrpc version=3 Jan 22 00:35:11.015796 systemd[1]: Started cri-containerd-a21b3d167934a1e08b6ecdaba1367277a3ee9a4bc979d1a5999caac244b35dd3.scope - libcontainer container a21b3d167934a1e08b6ecdaba1367277a3ee9a4bc979d1a5999caac244b35dd3. 
Jan 22 00:35:11.080000 audit: BPF prog-id=181 op=LOAD Jan 22 00:35:11.083140 kernel: kauditd_printk_skb: 84 callbacks suppressed Jan 22 00:35:11.083234 kernel: audit: type=1334 audit(1769042111.080:580): prog-id=181 op=LOAD Jan 22 00:35:11.080000 audit[3554]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3340 pid=3554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:11.089742 kernel: audit: type=1300 audit(1769042111.080:580): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3340 pid=3554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:11.080000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132316233643136373933346131653038623665636461626131333637 Jan 22 00:35:11.099931 kernel: audit: type=1327 audit(1769042111.080:580): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132316233643136373933346131653038623665636461626131333637 Jan 22 00:35:11.080000 audit: BPF prog-id=182 op=LOAD Jan 22 00:35:11.117881 kernel: audit: type=1334 audit(1769042111.080:581): prog-id=182 op=LOAD Jan 22 00:35:11.080000 audit[3554]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3340 pid=3554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:11.131533 kernel: audit: type=1300 audit(1769042111.080:581): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3340 pid=3554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:11.080000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132316233643136373933346131653038623665636461626131333637 Jan 22 00:35:11.144156 kernel: audit: type=1327 audit(1769042111.080:581): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132316233643136373933346131653038623665636461626131333637 Jan 22 00:35:11.144215 kernel: audit: type=1334 audit(1769042111.081:582): prog-id=182 op=UNLOAD Jan 22 00:35:11.081000 audit: BPF prog-id=182 op=UNLOAD Jan 22 00:35:11.147589 kernel: audit: type=1300 audit(1769042111.081:582): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3340 pid=3554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:11.081000 
audit[3554]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3340 pid=3554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:11.081000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132316233643136373933346131653038623665636461626131333637 Jan 22 00:35:11.156157 kernel: audit: type=1327 audit(1769042111.081:582): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132316233643136373933346131653038623665636461626131333637 Jan 22 00:35:11.165750 kernel: audit: type=1334 audit(1769042111.081:583): prog-id=181 op=UNLOAD Jan 22 00:35:11.081000 audit: BPF prog-id=181 op=UNLOAD Jan 22 00:35:11.081000 audit[3554]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3340 pid=3554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:11.081000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132316233643136373933346131653038623665636461626131333637 Jan 22 00:35:11.081000 audit: BPF prog-id=183 op=LOAD Jan 22 00:35:11.081000 audit[3554]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3340 pid=3554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:11.081000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132316233643136373933346131653038623665636461626131333637 Jan 22 00:35:11.179066 containerd[1629]: time="2026-01-22T00:35:11.178935153Z" level=info msg="StartContainer for \"a21b3d167934a1e08b6ecdaba1367277a3ee9a4bc979d1a5999caac244b35dd3\" returns successfully" Jan 22 00:35:11.529914 kubelet[2805]: E0122 00:35:11.529860 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:35:11.869112 containerd[1629]: time="2026-01-22T00:35:11.868906781Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 22 00:35:11.874075 systemd[1]: cri-containerd-a21b3d167934a1e08b6ecdaba1367277a3ee9a4bc979d1a5999caac244b35dd3.scope: Deactivated successfully. Jan 22 00:35:11.875189 systemd[1]: cri-containerd-a21b3d167934a1e08b6ecdaba1367277a3ee9a4bc979d1a5999caac244b35dd3.scope: Consumed 748ms CPU time, 197.8M memory peak, 171.3M written to disk. 
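Editor's note: the containerd error above ("failed to reload cni configuration ... no network config found in /etc/cni/net.d: cni plugin not initialized") fires because the write event for /etc/cni/net.d/calico-kubeconfig arrives while the install-cni container is still laying files down and no network config with the usual *.conf/*.conflist extensions exists yet; the kubelet's "cni plugin not initialized" pod errors elsewhere in this log have the same cause. A minimal sketch of that directory check, assuming only the path from the message:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory taken from the containerd error above.
	const netDir = "/etc/cni/net.d"

	entries, err := os.ReadDir(netDir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", netDir, err)
		return
	}
	var configs []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist":
			configs = append(configs, e.Name())
		}
	}
	if len(configs) == 0 {
		// State reported in the log: only calico-kubeconfig is present so far.
		fmt.Println("no network config found; CNI stays uninitialized")
		return
	}
	fmt.Println("CNI configs present:", configs)
}
```

Once the install-cni container writes its conflist, the reload should succeed and the NetworkReady=false condition behind the csi-node-driver pod errors clears.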
Jan 22 00:35:11.877000 audit: BPF prog-id=183 op=UNLOAD Jan 22 00:35:11.878334 containerd[1629]: time="2026-01-22T00:35:11.878280711Z" level=info msg="received container exit event container_id:\"a21b3d167934a1e08b6ecdaba1367277a3ee9a4bc979d1a5999caac244b35dd3\" id:\"a21b3d167934a1e08b6ecdaba1367277a3ee9a4bc979d1a5999caac244b35dd3\" pid:3567 exited_at:{seconds:1769042111 nanos:876352689}" Jan 22 00:35:11.917103 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a21b3d167934a1e08b6ecdaba1367277a3ee9a4bc979d1a5999caac244b35dd3-rootfs.mount: Deactivated successfully. Jan 22 00:35:11.973556 kubelet[2805]: I0122 00:35:11.973085 2805 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 22 00:35:12.020384 systemd[1]: Created slice kubepods-burstable-pod38af0d26_1bef_4093_8245_e7a246914084.slice - libcontainer container kubepods-burstable-pod38af0d26_1bef_4093_8245_e7a246914084.slice. Jan 22 00:35:12.048313 kubelet[2805]: I0122 00:35:12.048266 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d4bbd0f-fe7b-41ce-884e-a153734deda0-config-volume\") pod \"coredns-66bc5c9577-v7rsm\" (UID: \"8d4bbd0f-fe7b-41ce-884e-a153734deda0\") " pod="kube-system/coredns-66bc5c9577-v7rsm" Jan 22 00:35:12.048428 kubelet[2805]: I0122 00:35:12.048326 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9865m\" (UniqueName: \"kubernetes.io/projected/8d4bbd0f-fe7b-41ce-884e-a153734deda0-kube-api-access-9865m\") pod \"coredns-66bc5c9577-v7rsm\" (UID: \"8d4bbd0f-fe7b-41ce-884e-a153734deda0\") " pod="kube-system/coredns-66bc5c9577-v7rsm" Jan 22 00:35:12.048428 kubelet[2805]: I0122 00:35:12.048352 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78tqn\" (UniqueName: \"kubernetes.io/projected/c579fa7a-8ee2-4338-b81e-6fc1959a328f-kube-api-access-78tqn\") pod \"calico-kube-controllers-74c55dd4c7-ljvfs\" (UID: \"c579fa7a-8ee2-4338-b81e-6fc1959a328f\") " pod="calico-system/calico-kube-controllers-74c55dd4c7-ljvfs" Jan 22 00:35:12.048428 kubelet[2805]: I0122 00:35:12.048386 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c579fa7a-8ee2-4338-b81e-6fc1959a328f-tigera-ca-bundle\") pod \"calico-kube-controllers-74c55dd4c7-ljvfs\" (UID: \"c579fa7a-8ee2-4338-b81e-6fc1959a328f\") " pod="calico-system/calico-kube-controllers-74c55dd4c7-ljvfs" Jan 22 00:35:12.048428 kubelet[2805]: I0122 00:35:12.048417 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw2cm\" (UniqueName: \"kubernetes.io/projected/38af0d26-1bef-4093-8245-e7a246914084-kube-api-access-gw2cm\") pod \"coredns-66bc5c9577-b6ffb\" (UID: \"38af0d26-1bef-4093-8245-e7a246914084\") " pod="kube-system/coredns-66bc5c9577-b6ffb" Jan 22 00:35:12.048569 kubelet[2805]: I0122 00:35:12.048464 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e76e7dc-ca5d-46c4-8bbe-50d71d78158f-whisker-ca-bundle\") pod \"whisker-677b84c6dd-wp8nf\" (UID: \"3e76e7dc-ca5d-46c4-8bbe-50d71d78158f\") " pod="calico-system/whisker-677b84c6dd-wp8nf" Jan 22 00:35:12.048569 kubelet[2805]: I0122 00:35:12.048495 2805 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38af0d26-1bef-4093-8245-e7a246914084-config-volume\") pod \"coredns-66bc5c9577-b6ffb\" (UID: \"38af0d26-1bef-4093-8245-e7a246914084\") " pod="kube-system/coredns-66bc5c9577-b6ffb" Jan 22 00:35:12.048569 kubelet[2805]: I0122 00:35:12.048555 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3e76e7dc-ca5d-46c4-8bbe-50d71d78158f-whisker-backend-key-pair\") pod \"whisker-677b84c6dd-wp8nf\" (UID: \"3e76e7dc-ca5d-46c4-8bbe-50d71d78158f\") " pod="calico-system/whisker-677b84c6dd-wp8nf" Jan 22 00:35:12.048642 kubelet[2805]: I0122 00:35:12.048584 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stmv5\" (UniqueName: \"kubernetes.io/projected/3e76e7dc-ca5d-46c4-8bbe-50d71d78158f-kube-api-access-stmv5\") pod \"whisker-677b84c6dd-wp8nf\" (UID: \"3e76e7dc-ca5d-46c4-8bbe-50d71d78158f\") " pod="calico-system/whisker-677b84c6dd-wp8nf" Jan 22 00:35:12.049276 systemd[1]: Created slice kubepods-besteffort-pod3e76e7dc_ca5d_46c4_8bbe_50d71d78158f.slice - libcontainer container kubepods-besteffort-pod3e76e7dc_ca5d_46c4_8bbe_50d71d78158f.slice. Jan 22 00:35:12.060141 systemd[1]: Created slice kubepods-burstable-pod8d4bbd0f_fe7b_41ce_884e_a153734deda0.slice - libcontainer container kubepods-burstable-pod8d4bbd0f_fe7b_41ce_884e_a153734deda0.slice. Jan 22 00:35:12.069375 systemd[1]: Created slice kubepods-besteffort-podc579fa7a_8ee2_4338_b81e_6fc1959a328f.slice - libcontainer container kubepods-besteffort-podc579fa7a_8ee2_4338_b81e_6fc1959a328f.slice. Jan 22 00:35:12.080851 systemd[1]: Created slice kubepods-besteffort-pod53fd5176_64d2_4a70_9167_8081a837fe6e.slice - libcontainer container kubepods-besteffort-pod53fd5176_64d2_4a70_9167_8081a837fe6e.slice. Jan 22 00:35:12.087605 systemd[1]: Created slice kubepods-besteffort-podac5cf267_1601_4ae5_91e1_dc1496ea695f.slice - libcontainer container kubepods-besteffort-podac5cf267_1601_4ae5_91e1_dc1496ea695f.slice. Jan 22 00:35:12.096201 systemd[1]: Created slice kubepods-besteffort-pod7058f0b0_e750_4a4f_832e_cf58713e25a5.slice - libcontainer container kubepods-besteffort-pod7058f0b0_e750_4a4f_832e_cf58713e25a5.slice. 
Jan 22 00:35:12.149822 kubelet[2805]: I0122 00:35:12.149711 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9txkz\" (UniqueName: \"kubernetes.io/projected/53fd5176-64d2-4a70-9167-8081a837fe6e-kube-api-access-9txkz\") pod \"calico-apiserver-bbb7b878c-n8l2v\" (UID: \"53fd5176-64d2-4a70-9167-8081a837fe6e\") " pod="calico-apiserver/calico-apiserver-bbb7b878c-n8l2v" Jan 22 00:35:12.149822 kubelet[2805]: I0122 00:35:12.149756 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm9ml\" (UniqueName: \"kubernetes.io/projected/ac5cf267-1601-4ae5-91e1-dc1496ea695f-kube-api-access-tm9ml\") pod \"calico-apiserver-bbb7b878c-c279j\" (UID: \"ac5cf267-1601-4ae5-91e1-dc1496ea695f\") " pod="calico-apiserver/calico-apiserver-bbb7b878c-c279j" Jan 22 00:35:12.151196 kubelet[2805]: I0122 00:35:12.149801 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7058f0b0-e750-4a4f-832e-cf58713e25a5-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-jv75z\" (UID: \"7058f0b0-e750-4a4f-832e-cf58713e25a5\") " pod="calico-system/goldmane-7c778bb748-jv75z" Jan 22 00:35:12.151309 kubelet[2805]: I0122 00:35:12.151201 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ac5cf267-1601-4ae5-91e1-dc1496ea695f-calico-apiserver-certs\") pod \"calico-apiserver-bbb7b878c-c279j\" (UID: \"ac5cf267-1601-4ae5-91e1-dc1496ea695f\") " pod="calico-apiserver/calico-apiserver-bbb7b878c-c279j" Jan 22 00:35:12.151309 kubelet[2805]: I0122 00:35:12.151254 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7058f0b0-e750-4a4f-832e-cf58713e25a5-goldmane-key-pair\") pod \"goldmane-7c778bb748-jv75z\" (UID: \"7058f0b0-e750-4a4f-832e-cf58713e25a5\") " pod="calico-system/goldmane-7c778bb748-jv75z" Jan 22 00:35:12.151359 kubelet[2805]: I0122 00:35:12.151320 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/53fd5176-64d2-4a70-9167-8081a837fe6e-calico-apiserver-certs\") pod \"calico-apiserver-bbb7b878c-n8l2v\" (UID: \"53fd5176-64d2-4a70-9167-8081a837fe6e\") " pod="calico-apiserver/calico-apiserver-bbb7b878c-n8l2v" Jan 22 00:35:12.151383 kubelet[2805]: I0122 00:35:12.151362 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7058f0b0-e750-4a4f-832e-cf58713e25a5-config\") pod \"goldmane-7c778bb748-jv75z\" (UID: \"7058f0b0-e750-4a4f-832e-cf58713e25a5\") " pod="calico-system/goldmane-7c778bb748-jv75z" Jan 22 00:35:12.151404 kubelet[2805]: I0122 00:35:12.151384 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26jn8\" (UniqueName: \"kubernetes.io/projected/7058f0b0-e750-4a4f-832e-cf58713e25a5-kube-api-access-26jn8\") pod \"goldmane-7c778bb748-jv75z\" (UID: \"7058f0b0-e750-4a4f-832e-cf58713e25a5\") " pod="calico-system/goldmane-7c778bb748-jv75z" Jan 22 00:35:12.342470 kubelet[2805]: E0122 00:35:12.342438 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:35:12.343306 containerd[1629]: time="2026-01-22T00:35:12.343254165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-b6ffb,Uid:38af0d26-1bef-4093-8245-e7a246914084,Namespace:kube-system,Attempt:0,}" Jan 22 00:35:12.356975 containerd[1629]: time="2026-01-22T00:35:12.356883952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-677b84c6dd-wp8nf,Uid:3e76e7dc-ca5d-46c4-8bbe-50d71d78158f,Namespace:calico-system,Attempt:0,}" Jan 22 00:35:12.365475 kubelet[2805]: E0122 00:35:12.365449 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:35:12.366389 containerd[1629]: time="2026-01-22T00:35:12.366363371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-v7rsm,Uid:8d4bbd0f-fe7b-41ce-884e-a153734deda0,Namespace:kube-system,Attempt:0,}" Jan 22 00:35:12.375624 containerd[1629]: time="2026-01-22T00:35:12.375572936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74c55dd4c7-ljvfs,Uid:c579fa7a-8ee2-4338-b81e-6fc1959a328f,Namespace:calico-system,Attempt:0,}" Jan 22 00:35:12.398883 containerd[1629]: time="2026-01-22T00:35:12.398822630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bbb7b878c-n8l2v,Uid:53fd5176-64d2-4a70-9167-8081a837fe6e,Namespace:calico-apiserver,Attempt:0,}" Jan 22 00:35:12.401870 containerd[1629]: time="2026-01-22T00:35:12.401126678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bbb7b878c-c279j,Uid:ac5cf267-1601-4ae5-91e1-dc1496ea695f,Namespace:calico-apiserver,Attempt:0,}" Jan 22 00:35:12.404998 containerd[1629]: time="2026-01-22T00:35:12.404898407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-jv75z,Uid:7058f0b0-e750-4a4f-832e-cf58713e25a5,Namespace:calico-system,Attempt:0,}" Jan 22 00:35:12.429544 systemd[1]: Created slice kubepods-besteffort-podae45d206_785e_4662_9efc_4b0987941483.slice - libcontainer container kubepods-besteffort-podae45d206_785e_4662_9efc_4b0987941483.slice. 
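Editor's note: the recurring "Nameserver limits exceeded" errors mean the node's resolv.conf lists more nameservers than the kubelet will pass into pod sandboxes (the limit is three), so it keeps only the first three: 172.232.0.16, 172.232.0.21, 172.232.0.13. A minimal sketch of that truncation, using a hypothetical resolv.conf whose fourth entry (192.0.2.53) is invented for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // kubelet's per-pod DNS nameserver limit

// applyNameserverLimit keeps only the first maxNameservers entries,
// mirroring the "some nameservers have been omitted" behaviour in the log.
func applyNameserverLimit(resolvConf string) []string {
	var servers []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		servers = servers[:maxNameservers]
	}
	return servers
}

func main() {
	// Hypothetical node resolv.conf; only the first three entries match the log.
	resolvConf := `nameserver 172.232.0.16
nameserver 172.232.0.21
nameserver 172.232.0.13
nameserver 192.0.2.53`
	fmt.Println(applyNameserverLimit(resolvConf))
	// [172.232.0.16 172.232.0.21 172.232.0.13]
}
```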
Jan 22 00:35:12.435899 containerd[1629]: time="2026-01-22T00:35:12.435866173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wm244,Uid:ae45d206-785e-4662-9efc-4b0987941483,Namespace:calico-system,Attempt:0,}" Jan 22 00:35:12.545198 kubelet[2805]: E0122 00:35:12.544810 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:35:12.563581 containerd[1629]: time="2026-01-22T00:35:12.563215744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 22 00:35:12.603773 containerd[1629]: time="2026-01-22T00:35:12.603725897Z" level=error msg="Failed to destroy network for sandbox \"5a2f3e8aa53829649561c5ba197b753b440d6c94a014af91826eac980e513051\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:35:12.606233 containerd[1629]: time="2026-01-22T00:35:12.606201017Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74c55dd4c7-ljvfs,Uid:c579fa7a-8ee2-4338-b81e-6fc1959a328f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a2f3e8aa53829649561c5ba197b753b440d6c94a014af91826eac980e513051\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:35:12.606827 kubelet[2805]: E0122 00:35:12.606675 2805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a2f3e8aa53829649561c5ba197b753b440d6c94a014af91826eac980e513051\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:35:12.606978 kubelet[2805]: E0122 00:35:12.606954 2805 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a2f3e8aa53829649561c5ba197b753b440d6c94a014af91826eac980e513051\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74c55dd4c7-ljvfs" Jan 22 00:35:12.607357 kubelet[2805]: E0122 00:35:12.607049 2805 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a2f3e8aa53829649561c5ba197b753b440d6c94a014af91826eac980e513051\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74c55dd4c7-ljvfs" Jan 22 00:35:12.607357 kubelet[2805]: E0122 00:35:12.607117 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-74c55dd4c7-ljvfs_calico-system(c579fa7a-8ee2-4338-b81e-6fc1959a328f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-74c55dd4c7-ljvfs_calico-system(c579fa7a-8ee2-4338-b81e-6fc1959a328f)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"5a2f3e8aa53829649561c5ba197b753b440d6c94a014af91826eac980e513051\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74c55dd4c7-ljvfs" podUID="c579fa7a-8ee2-4338-b81e-6fc1959a328f" Jan 22 00:35:12.614685 containerd[1629]: time="2026-01-22T00:35:12.614658014Z" level=error msg="Failed to destroy network for sandbox \"b38a0631cf106f81f9c84635eed78355be673522b2c67e62433c330db6805108\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:35:12.632311 containerd[1629]: time="2026-01-22T00:35:12.631739349Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-b6ffb,Uid:38af0d26-1bef-4093-8245-e7a246914084,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b38a0631cf106f81f9c84635eed78355be673522b2c67e62433c330db6805108\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:35:12.632433 kubelet[2805]: E0122 00:35:12.631937 2805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b38a0631cf106f81f9c84635eed78355be673522b2c67e62433c330db6805108\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:35:12.632433 kubelet[2805]: E0122 00:35:12.631990 2805 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b38a0631cf106f81f9c84635eed78355be673522b2c67e62433c330db6805108\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-b6ffb" Jan 22 00:35:12.632433 kubelet[2805]: E0122 00:35:12.632014 2805 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b38a0631cf106f81f9c84635eed78355be673522b2c67e62433c330db6805108\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-b6ffb" Jan 22 00:35:12.632583 kubelet[2805]: E0122 00:35:12.632067 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-b6ffb_kube-system(38af0d26-1bef-4093-8245-e7a246914084)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-b6ffb_kube-system(38af0d26-1bef-4093-8245-e7a246914084)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b38a0631cf106f81f9c84635eed78355be673522b2c67e62433c330db6805108\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-b6ffb" 
podUID="38af0d26-1bef-4093-8245-e7a246914084" Jan 22 00:35:12.644293 containerd[1629]: time="2026-01-22T00:35:12.644145317Z" level=error msg="Failed to destroy network for sandbox \"4e418c2f1052c52cbd60867553b6d3ff8db2d33710f2fd9d613616801a1f5009\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:35:12.649017 containerd[1629]: time="2026-01-22T00:35:12.648962771Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-v7rsm,Uid:8d4bbd0f-fe7b-41ce-884e-a153734deda0,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e418c2f1052c52cbd60867553b6d3ff8db2d33710f2fd9d613616801a1f5009\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:35:12.649281 kubelet[2805]: E0122 00:35:12.649149 2805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e418c2f1052c52cbd60867553b6d3ff8db2d33710f2fd9d613616801a1f5009\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:35:12.649281 kubelet[2805]: E0122 00:35:12.649191 2805 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e418c2f1052c52cbd60867553b6d3ff8db2d33710f2fd9d613616801a1f5009\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-v7rsm" Jan 22 00:35:12.649281 kubelet[2805]: E0122 00:35:12.649224 2805 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e418c2f1052c52cbd60867553b6d3ff8db2d33710f2fd9d613616801a1f5009\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-v7rsm" Jan 22 00:35:12.649466 kubelet[2805]: E0122 00:35:12.649284 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-v7rsm_kube-system(8d4bbd0f-fe7b-41ce-884e-a153734deda0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-v7rsm_kube-system(8d4bbd0f-fe7b-41ce-884e-a153734deda0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e418c2f1052c52cbd60867553b6d3ff8db2d33710f2fd9d613616801a1f5009\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-v7rsm" podUID="8d4bbd0f-fe7b-41ce-884e-a153734deda0" Jan 22 00:35:12.663677 containerd[1629]: time="2026-01-22T00:35:12.663580626Z" level=error msg="Failed to destroy network for sandbox \"9a81fc5e222334018a68398ffaa8c921f544edb6c4483abb1fc7217c55bf01cb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:35:12.665979 containerd[1629]: time="2026-01-22T00:35:12.665936392Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bbb7b878c-c279j,Uid:ac5cf267-1601-4ae5-91e1-dc1496ea695f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a81fc5e222334018a68398ffaa8c921f544edb6c4483abb1fc7217c55bf01cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:35:12.666201 kubelet[2805]: E0122 00:35:12.666174 2805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a81fc5e222334018a68398ffaa8c921f544edb6c4483abb1fc7217c55bf01cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:35:12.666326 kubelet[2805]: E0122 00:35:12.666302 2805 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a81fc5e222334018a68398ffaa8c921f544edb6c4483abb1fc7217c55bf01cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bbb7b878c-c279j" Jan 22 00:35:12.666442 kubelet[2805]: E0122 00:35:12.666419 2805 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a81fc5e222334018a68398ffaa8c921f544edb6c4483abb1fc7217c55bf01cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bbb7b878c-c279j" Jan 22 00:35:12.666586 kubelet[2805]: E0122 00:35:12.666557 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bbb7b878c-c279j_calico-apiserver(ac5cf267-1601-4ae5-91e1-dc1496ea695f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bbb7b878c-c279j_calico-apiserver(ac5cf267-1601-4ae5-91e1-dc1496ea695f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a81fc5e222334018a68398ffaa8c921f544edb6c4483abb1fc7217c55bf01cb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bbb7b878c-c279j" podUID="ac5cf267-1601-4ae5-91e1-dc1496ea695f" Jan 22 00:35:12.695023 containerd[1629]: time="2026-01-22T00:35:12.694965016Z" level=error msg="Failed to destroy network for sandbox \"170e0948a8ae5dbec29523a3c565b3a0bb32fe8ce156132c3e64e8ed8327dcf8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:35:12.697156 containerd[1629]: time="2026-01-22T00:35:12.696983567Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-bbb7b878c-n8l2v,Uid:53fd5176-64d2-4a70-9167-8081a837fe6e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"170e0948a8ae5dbec29523a3c565b3a0bb32fe8ce156132c3e64e8ed8327dcf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:35:12.697489 kubelet[2805]: E0122 00:35:12.697437 2805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"170e0948a8ae5dbec29523a3c565b3a0bb32fe8ce156132c3e64e8ed8327dcf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:35:12.697569 kubelet[2805]: E0122 00:35:12.697544 2805 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"170e0948a8ae5dbec29523a3c565b3a0bb32fe8ce156132c3e64e8ed8327dcf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bbb7b878c-n8l2v" Jan 22 00:35:12.697610 kubelet[2805]: E0122 00:35:12.697574 2805 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"170e0948a8ae5dbec29523a3c565b3a0bb32fe8ce156132c3e64e8ed8327dcf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bbb7b878c-n8l2v" Jan 22 00:35:12.698243 kubelet[2805]: E0122 00:35:12.697639 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bbb7b878c-n8l2v_calico-apiserver(53fd5176-64d2-4a70-9167-8081a837fe6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bbb7b878c-n8l2v_calico-apiserver(53fd5176-64d2-4a70-9167-8081a837fe6e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"170e0948a8ae5dbec29523a3c565b3a0bb32fe8ce156132c3e64e8ed8327dcf8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bbb7b878c-n8l2v" podUID="53fd5176-64d2-4a70-9167-8081a837fe6e" Jan 22 00:35:12.698579 containerd[1629]: time="2026-01-22T00:35:12.698549441Z" level=error msg="Failed to destroy network for sandbox \"cd8698b0f2984b26a4c0be83c3f8401cf7297379aed7ef1b43081c75cd3bd18a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:35:12.700658 containerd[1629]: time="2026-01-22T00:35:12.700603747Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-677b84c6dd-wp8nf,Uid:3e76e7dc-ca5d-46c4-8bbe-50d71d78158f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd8698b0f2984b26a4c0be83c3f8401cf7297379aed7ef1b43081c75cd3bd18a\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:35:12.701024 kubelet[2805]: E0122 00:35:12.700960 2805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd8698b0f2984b26a4c0be83c3f8401cf7297379aed7ef1b43081c75cd3bd18a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:35:12.701084 kubelet[2805]: E0122 00:35:12.701034 2805 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd8698b0f2984b26a4c0be83c3f8401cf7297379aed7ef1b43081c75cd3bd18a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-677b84c6dd-wp8nf" Jan 22 00:35:12.701084 kubelet[2805]: E0122 00:35:12.701058 2805 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd8698b0f2984b26a4c0be83c3f8401cf7297379aed7ef1b43081c75cd3bd18a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-677b84c6dd-wp8nf" Jan 22 00:35:12.701350 kubelet[2805]: E0122 00:35:12.701315 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-677b84c6dd-wp8nf_calico-system(3e76e7dc-ca5d-46c4-8bbe-50d71d78158f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-677b84c6dd-wp8nf_calico-system(3e76e7dc-ca5d-46c4-8bbe-50d71d78158f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd8698b0f2984b26a4c0be83c3f8401cf7297379aed7ef1b43081c75cd3bd18a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-677b84c6dd-wp8nf" podUID="3e76e7dc-ca5d-46c4-8bbe-50d71d78158f" Jan 22 00:35:12.704343 containerd[1629]: time="2026-01-22T00:35:12.704310778Z" level=error msg="Failed to destroy network for sandbox \"dc481f603d5b3b5b41a6baf53ed45a61132abe0055c01140fd6456c1d1d47fb5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:35:12.706929 containerd[1629]: time="2026-01-22T00:35:12.706896643Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-jv75z,Uid:7058f0b0-e750-4a4f-832e-cf58713e25a5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc481f603d5b3b5b41a6baf53ed45a61132abe0055c01140fd6456c1d1d47fb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:35:12.707558 kubelet[2805]: E0122 00:35:12.707468 2805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"dc481f603d5b3b5b41a6baf53ed45a61132abe0055c01140fd6456c1d1d47fb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:35:12.707723 kubelet[2805]: E0122 00:35:12.707661 2805 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc481f603d5b3b5b41a6baf53ed45a61132abe0055c01140fd6456c1d1d47fb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-jv75z" Jan 22 00:35:12.707817 kubelet[2805]: E0122 00:35:12.707794 2805 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc481f603d5b3b5b41a6baf53ed45a61132abe0055c01140fd6456c1d1d47fb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-jv75z" Jan 22 00:35:12.708240 kubelet[2805]: E0122 00:35:12.707925 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-jv75z_calico-system(7058f0b0-e750-4a4f-832e-cf58713e25a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-jv75z_calico-system(7058f0b0-e750-4a4f-832e-cf58713e25a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc481f603d5b3b5b41a6baf53ed45a61132abe0055c01140fd6456c1d1d47fb5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-jv75z" podUID="7058f0b0-e750-4a4f-832e-cf58713e25a5" Jan 22 00:35:12.723693 containerd[1629]: time="2026-01-22T00:35:12.723661096Z" level=error msg="Failed to destroy network for sandbox \"3d1519ef8ca18162b2744bf3c54b88e0c425b5dc39c7f2ad823b24543e8d56c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:35:12.725019 containerd[1629]: time="2026-01-22T00:35:12.724981097Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wm244,Uid:ae45d206-785e-4662-9efc-4b0987941483,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d1519ef8ca18162b2744bf3c54b88e0c425b5dc39c7f2ad823b24543e8d56c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:35:12.725228 kubelet[2805]: E0122 00:35:12.725200 2805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d1519ef8ca18162b2744bf3c54b88e0c425b5dc39c7f2ad823b24543e8d56c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:35:12.725297 kubelet[2805]: E0122 00:35:12.725245 2805 
kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d1519ef8ca18162b2744bf3c54b88e0c425b5dc39c7f2ad823b24543e8d56c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wm244" Jan 22 00:35:12.725343 kubelet[2805]: E0122 00:35:12.725298 2805 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d1519ef8ca18162b2744bf3c54b88e0c425b5dc39c7f2ad823b24543e8d56c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wm244" Jan 22 00:35:12.725417 kubelet[2805]: E0122 00:35:12.725387 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wm244_calico-system(ae45d206-785e-4662-9efc-4b0987941483)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wm244_calico-system(ae45d206-785e-4662-9efc-4b0987941483)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d1519ef8ca18162b2744bf3c54b88e0c425b5dc39c7f2ad823b24543e8d56c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wm244" podUID="ae45d206-785e-4662-9efc-4b0987941483" Jan 22 00:35:17.084396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3603078063.mount: Deactivated successfully. 
Jan 22 00:35:17.126690 containerd[1629]: time="2026-01-22T00:35:17.126627202Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:35:17.128168 containerd[1629]: time="2026-01-22T00:35:17.128011948Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 22 00:35:17.128747 containerd[1629]: time="2026-01-22T00:35:17.128711792Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:35:17.130874 containerd[1629]: time="2026-01-22T00:35:17.130841557Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:35:17.132111 containerd[1629]: time="2026-01-22T00:35:17.131976987Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 4.568716418s" Jan 22 00:35:17.132111 containerd[1629]: time="2026-01-22T00:35:17.132014381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 22 00:35:17.159147 containerd[1629]: time="2026-01-22T00:35:17.159086351Z" level=info msg="CreateContainer within sandbox \"9bf6089003235d11bb9c818e5c17096a2aae0f10d61accc37fb5f497ab8d47bf\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 22 00:35:17.166724 containerd[1629]: time="2026-01-22T00:35:17.166688095Z" level=info msg="Container 4e6ae31d2788b051aa503d7d0b0e805d5195c11c94e20120e08fbbabd5a1d2c8: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:35:17.180896 containerd[1629]: time="2026-01-22T00:35:17.180858162Z" level=info msg="CreateContainer within sandbox \"9bf6089003235d11bb9c818e5c17096a2aae0f10d61accc37fb5f497ab8d47bf\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4e6ae31d2788b051aa503d7d0b0e805d5195c11c94e20120e08fbbabd5a1d2c8\"" Jan 22 00:35:17.182336 containerd[1629]: time="2026-01-22T00:35:17.181396729Z" level=info msg="StartContainer for \"4e6ae31d2788b051aa503d7d0b0e805d5195c11c94e20120e08fbbabd5a1d2c8\"" Jan 22 00:35:17.184645 containerd[1629]: time="2026-01-22T00:35:17.184567144Z" level=info msg="connecting to shim 4e6ae31d2788b051aa503d7d0b0e805d5195c11c94e20120e08fbbabd5a1d2c8" address="unix:///run/containerd/s/147a7a2f11ec64a1bd06d2a67bad47ca3ef29b2047e6385a9bc92fb814442eeb" protocol=ttrpc version=3 Jan 22 00:35:17.255716 systemd[1]: Started cri-containerd-4e6ae31d2788b051aa503d7d0b0e805d5195c11c94e20120e08fbbabd5a1d2c8.scope - libcontainer container 4e6ae31d2788b051aa503d7d0b0e805d5195c11c94e20120e08fbbabd5a1d2c8. 
Jan 22 00:35:17.343061 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 22 00:35:17.343200 kernel: audit: type=1334 audit(1769042117.337:586): prog-id=184 op=LOAD Jan 22 00:35:17.337000 audit: BPF prog-id=184 op=LOAD Jan 22 00:35:17.337000 audit[3821]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3340 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:17.356558 kernel: audit: type=1300 audit(1769042117.337:586): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3340 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:17.356647 kernel: audit: type=1327 audit(1769042117.337:586): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465366165333164323738386230353161613530336437643062306538 Jan 22 00:35:17.337000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465366165333164323738386230353161613530336437643062306538 Jan 22 00:35:17.341000 audit: BPF prog-id=185 op=LOAD Jan 22 00:35:17.368534 kernel: audit: type=1334 audit(1769042117.341:587): prog-id=185 op=LOAD Jan 22 00:35:17.341000 audit[3821]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3340 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:17.341000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465366165333164323738386230353161613530336437643062306538 Jan 22 00:35:17.380558 kernel: audit: type=1300 audit(1769042117.341:587): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3340 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:17.380634 kernel: audit: type=1327 audit(1769042117.341:587): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465366165333164323738386230353161613530336437643062306538 Jan 22 00:35:17.341000 audit: BPF prog-id=185 op=UNLOAD Jan 22 00:35:17.341000 audit[3821]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3340 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:17.396306 kernel: audit: type=1334 audit(1769042117.341:588): prog-id=185 op=UNLOAD Jan 22 00:35:17.396369 kernel: audit: type=1300 
audit(1769042117.341:588): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3340 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:17.341000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465366165333164323738386230353161613530336437643062306538 Jan 22 00:35:17.404739 kernel: audit: type=1327 audit(1769042117.341:588): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465366165333164323738386230353161613530336437643062306538 Jan 22 00:35:17.414538 kernel: audit: type=1334 audit(1769042117.341:589): prog-id=184 op=UNLOAD Jan 22 00:35:17.341000 audit: BPF prog-id=184 op=UNLOAD Jan 22 00:35:17.341000 audit[3821]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3340 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:17.341000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465366165333164323738386230353161613530336437643062306538 Jan 22 00:35:17.341000 audit: BPF prog-id=186 op=LOAD Jan 22 00:35:17.341000 audit[3821]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3340 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:17.341000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465366165333164323738386230353161613530336437643062306538 Jan 22 00:35:17.418409 containerd[1629]: time="2026-01-22T00:35:17.418376220Z" level=info msg="StartContainer for \"4e6ae31d2788b051aa503d7d0b0e805d5195c11c94e20120e08fbbabd5a1d2c8\" returns successfully" Jan 22 00:35:17.527418 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 22 00:35:17.527595 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 22 00:35:17.571075 kubelet[2805]: E0122 00:35:17.570701 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:35:17.605888 kubelet[2805]: I0122 00:35:17.604167 2805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-48nss" podStartSLOduration=2.011956667 podStartE2EDuration="13.60415009s" podCreationTimestamp="2026-01-22 00:35:04 +0000 UTC" firstStartedPulling="2026-01-22 00:35:05.540716182 +0000 UTC m=+20.262614378" lastFinishedPulling="2026-01-22 00:35:17.132909615 +0000 UTC m=+31.854807801" observedRunningTime="2026-01-22 00:35:17.600026374 +0000 UTC m=+32.321924560" watchObservedRunningTime="2026-01-22 00:35:17.60415009 +0000 UTC m=+32.326048296" Jan 22 00:35:17.792289 kubelet[2805]: I0122 00:35:17.792243 2805 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3e76e7dc-ca5d-46c4-8bbe-50d71d78158f-whisker-backend-key-pair\") pod \"3e76e7dc-ca5d-46c4-8bbe-50d71d78158f\" (UID: \"3e76e7dc-ca5d-46c4-8bbe-50d71d78158f\") " Jan 22 00:35:17.792786 kubelet[2805]: I0122 00:35:17.792312 2805 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stmv5\" (UniqueName: \"kubernetes.io/projected/3e76e7dc-ca5d-46c4-8bbe-50d71d78158f-kube-api-access-stmv5\") pod \"3e76e7dc-ca5d-46c4-8bbe-50d71d78158f\" (UID: \"3e76e7dc-ca5d-46c4-8bbe-50d71d78158f\") " Jan 22 00:35:17.792786 kubelet[2805]: I0122 00:35:17.792352 2805 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e76e7dc-ca5d-46c4-8bbe-50d71d78158f-whisker-ca-bundle\") pod \"3e76e7dc-ca5d-46c4-8bbe-50d71d78158f\" (UID: \"3e76e7dc-ca5d-46c4-8bbe-50d71d78158f\") " Jan 22 00:35:17.798124 kubelet[2805]: I0122 00:35:17.798035 2805 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e76e7dc-ca5d-46c4-8bbe-50d71d78158f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3e76e7dc-ca5d-46c4-8bbe-50d71d78158f" (UID: "3e76e7dc-ca5d-46c4-8bbe-50d71d78158f"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 00:35:17.800612 kubelet[2805]: I0122 00:35:17.800580 2805 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e76e7dc-ca5d-46c4-8bbe-50d71d78158f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3e76e7dc-ca5d-46c4-8bbe-50d71d78158f" (UID: "3e76e7dc-ca5d-46c4-8bbe-50d71d78158f"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 00:35:17.805258 kubelet[2805]: I0122 00:35:17.805225 2805 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e76e7dc-ca5d-46c4-8bbe-50d71d78158f-kube-api-access-stmv5" (OuterVolumeSpecName: "kube-api-access-stmv5") pod "3e76e7dc-ca5d-46c4-8bbe-50d71d78158f" (UID: "3e76e7dc-ca5d-46c4-8bbe-50d71d78158f"). InnerVolumeSpecName "kube-api-access-stmv5". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 00:35:17.895693 kubelet[2805]: I0122 00:35:17.895502 2805 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3e76e7dc-ca5d-46c4-8bbe-50d71d78158f-whisker-backend-key-pair\") on node \"172-232-4-171\" DevicePath \"\"" Jan 22 00:35:17.895693 kubelet[2805]: I0122 00:35:17.895569 2805 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-stmv5\" (UniqueName: \"kubernetes.io/projected/3e76e7dc-ca5d-46c4-8bbe-50d71d78158f-kube-api-access-stmv5\") on node \"172-232-4-171\" DevicePath \"\"" Jan 22 00:35:17.895693 kubelet[2805]: I0122 00:35:17.895585 2805 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e76e7dc-ca5d-46c4-8bbe-50d71d78158f-whisker-ca-bundle\") on node \"172-232-4-171\" DevicePath \"\"" Jan 22 00:35:18.086659 systemd[1]: var-lib-kubelet-pods-3e76e7dc\x2dca5d\x2d46c4\x2d8bbe\x2d50d71d78158f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dstmv5.mount: Deactivated successfully. Jan 22 00:35:18.086801 systemd[1]: var-lib-kubelet-pods-3e76e7dc\x2dca5d\x2d46c4\x2d8bbe\x2d50d71d78158f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 22 00:35:18.571538 kubelet[2805]: E0122 00:35:18.571158 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:35:18.580716 systemd[1]: Removed slice kubepods-besteffort-pod3e76e7dc_ca5d_46c4_8bbe_50d71d78158f.slice - libcontainer container kubepods-besteffort-pod3e76e7dc_ca5d_46c4_8bbe_50d71d78158f.slice. Jan 22 00:35:18.666762 systemd[1]: Created slice kubepods-besteffort-pod8ecd20e5_2e31_4297_a10a_ea50808543e7.slice - libcontainer container kubepods-besteffort-pod8ecd20e5_2e31_4297_a10a_ea50808543e7.slice. 
Jan 22 00:35:18.702027 kubelet[2805]: I0122 00:35:18.701970 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx4jf\" (UniqueName: \"kubernetes.io/projected/8ecd20e5-2e31-4297-a10a-ea50808543e7-kube-api-access-mx4jf\") pod \"whisker-687c765d98-jfjzf\" (UID: \"8ecd20e5-2e31-4297-a10a-ea50808543e7\") " pod="calico-system/whisker-687c765d98-jfjzf" Jan 22 00:35:18.702169 kubelet[2805]: I0122 00:35:18.702030 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8ecd20e5-2e31-4297-a10a-ea50808543e7-whisker-backend-key-pair\") pod \"whisker-687c765d98-jfjzf\" (UID: \"8ecd20e5-2e31-4297-a10a-ea50808543e7\") " pod="calico-system/whisker-687c765d98-jfjzf" Jan 22 00:35:18.702169 kubelet[2805]: I0122 00:35:18.702067 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ecd20e5-2e31-4297-a10a-ea50808543e7-whisker-ca-bundle\") pod \"whisker-687c765d98-jfjzf\" (UID: \"8ecd20e5-2e31-4297-a10a-ea50808543e7\") " pod="calico-system/whisker-687c765d98-jfjzf" Jan 22 00:35:18.975911 containerd[1629]: time="2026-01-22T00:35:18.975833802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-687c765d98-jfjzf,Uid:8ecd20e5-2e31-4297-a10a-ea50808543e7,Namespace:calico-system,Attempt:0,}" Jan 22 00:35:19.149684 systemd-networkd[1515]: calida957aaad48: Link UP Jan 22 00:35:19.150864 systemd-networkd[1515]: calida957aaad48: Gained carrier Jan 22 00:35:19.177838 containerd[1629]: 2026-01-22 00:35:19.013 [INFO][3931] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 22 00:35:19.177838 containerd[1629]: 2026-01-22 00:35:19.054 [INFO][3931] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--4--171-k8s-whisker--687c765d98--jfjzf-eth0 whisker-687c765d98- calico-system 8ecd20e5-2e31-4297-a10a-ea50808543e7 896 0 2026-01-22 00:35:18 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:687c765d98 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-232-4-171 whisker-687c765d98-jfjzf eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calida957aaad48 [] [] }} ContainerID="8e66bf800741a5ad9dc2c0d190c20c911d1dceecde13855f674db7a4a2e46bc2" Namespace="calico-system" Pod="whisker-687c765d98-jfjzf" WorkloadEndpoint="172--232--4--171-k8s-whisker--687c765d98--jfjzf-" Jan 22 00:35:19.177838 containerd[1629]: 2026-01-22 00:35:19.054 [INFO][3931] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8e66bf800741a5ad9dc2c0d190c20c911d1dceecde13855f674db7a4a2e46bc2" Namespace="calico-system" Pod="whisker-687c765d98-jfjzf" WorkloadEndpoint="172--232--4--171-k8s-whisker--687c765d98--jfjzf-eth0" Jan 22 00:35:19.177838 containerd[1629]: 2026-01-22 00:35:19.091 [INFO][3942] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e66bf800741a5ad9dc2c0d190c20c911d1dceecde13855f674db7a4a2e46bc2" HandleID="k8s-pod-network.8e66bf800741a5ad9dc2c0d190c20c911d1dceecde13855f674db7a4a2e46bc2" Workload="172--232--4--171-k8s-whisker--687c765d98--jfjzf-eth0" Jan 22 00:35:19.178071 containerd[1629]: 2026-01-22 00:35:19.092 [INFO][3942] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="8e66bf800741a5ad9dc2c0d190c20c911d1dceecde13855f674db7a4a2e46bc2" HandleID="k8s-pod-network.8e66bf800741a5ad9dc2c0d190c20c911d1dceecde13855f674db7a4a2e46bc2" Workload="172--232--4--171-k8s-whisker--687c765d98--jfjzf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5020), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-4-171", "pod":"whisker-687c765d98-jfjzf", "timestamp":"2026-01-22 00:35:19.091913447 +0000 UTC"}, Hostname:"172-232-4-171", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 22 00:35:19.178071 containerd[1629]: 2026-01-22 00:35:19.092 [INFO][3942] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 22 00:35:19.178071 containerd[1629]: 2026-01-22 00:35:19.092 [INFO][3942] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 22 00:35:19.178071 containerd[1629]: 2026-01-22 00:35:19.092 [INFO][3942] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-4-171' Jan 22 00:35:19.178071 containerd[1629]: 2026-01-22 00:35:19.100 [INFO][3942] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8e66bf800741a5ad9dc2c0d190c20c911d1dceecde13855f674db7a4a2e46bc2" host="172-232-4-171" Jan 22 00:35:19.178071 containerd[1629]: 2026-01-22 00:35:19.105 [INFO][3942] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-4-171" Jan 22 00:35:19.178071 containerd[1629]: 2026-01-22 00:35:19.110 [INFO][3942] ipam/ipam.go 511: Trying affinity for 192.168.71.64/26 host="172-232-4-171" Jan 22 00:35:19.178071 containerd[1629]: 2026-01-22 00:35:19.112 [INFO][3942] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.64/26 host="172-232-4-171" Jan 22 00:35:19.178071 containerd[1629]: 2026-01-22 00:35:19.114 [INFO][3942] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.64/26 host="172-232-4-171" Jan 22 00:35:19.178071 containerd[1629]: 2026-01-22 00:35:19.114 [INFO][3942] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.71.64/26 handle="k8s-pod-network.8e66bf800741a5ad9dc2c0d190c20c911d1dceecde13855f674db7a4a2e46bc2" host="172-232-4-171" Jan 22 00:35:19.178287 containerd[1629]: 2026-01-22 00:35:19.117 [INFO][3942] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8e66bf800741a5ad9dc2c0d190c20c911d1dceecde13855f674db7a4a2e46bc2 Jan 22 00:35:19.178287 containerd[1629]: 2026-01-22 00:35:19.121 [INFO][3942] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.71.64/26 handle="k8s-pod-network.8e66bf800741a5ad9dc2c0d190c20c911d1dceecde13855f674db7a4a2e46bc2" host="172-232-4-171" Jan 22 00:35:19.178287 containerd[1629]: 2026-01-22 00:35:19.126 [INFO][3942] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.71.65/26] block=192.168.71.64/26 handle="k8s-pod-network.8e66bf800741a5ad9dc2c0d190c20c911d1dceecde13855f674db7a4a2e46bc2" host="172-232-4-171" Jan 22 00:35:19.178287 containerd[1629]: 2026-01-22 00:35:19.126 [INFO][3942] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.65/26] handle="k8s-pod-network.8e66bf800741a5ad9dc2c0d190c20c911d1dceecde13855f674db7a4a2e46bc2" host="172-232-4-171" Jan 22 00:35:19.178287 containerd[1629]: 2026-01-22 00:35:19.126 [INFO][3942] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 22 00:35:19.178287 containerd[1629]: 2026-01-22 00:35:19.126 [INFO][3942] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.71.65/26] IPv6=[] ContainerID="8e66bf800741a5ad9dc2c0d190c20c911d1dceecde13855f674db7a4a2e46bc2" HandleID="k8s-pod-network.8e66bf800741a5ad9dc2c0d190c20c911d1dceecde13855f674db7a4a2e46bc2" Workload="172--232--4--171-k8s-whisker--687c765d98--jfjzf-eth0" Jan 22 00:35:19.178400 containerd[1629]: 2026-01-22 00:35:19.132 [INFO][3931] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8e66bf800741a5ad9dc2c0d190c20c911d1dceecde13855f674db7a4a2e46bc2" Namespace="calico-system" Pod="whisker-687c765d98-jfjzf" WorkloadEndpoint="172--232--4--171-k8s-whisker--687c765d98--jfjzf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--4--171-k8s-whisker--687c765d98--jfjzf-eth0", GenerateName:"whisker-687c765d98-", Namespace:"calico-system", SelfLink:"", UID:"8ecd20e5-2e31-4297-a10a-ea50808543e7", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 35, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"687c765d98", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-4-171", ContainerID:"", Pod:"whisker-687c765d98-jfjzf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.71.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calida957aaad48", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:35:19.178400 containerd[1629]: 2026-01-22 00:35:19.132 [INFO][3931] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.65/32] ContainerID="8e66bf800741a5ad9dc2c0d190c20c911d1dceecde13855f674db7a4a2e46bc2" Namespace="calico-system" Pod="whisker-687c765d98-jfjzf" WorkloadEndpoint="172--232--4--171-k8s-whisker--687c765d98--jfjzf-eth0" Jan 22 00:35:19.178470 containerd[1629]: 2026-01-22 00:35:19.132 [INFO][3931] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calida957aaad48 ContainerID="8e66bf800741a5ad9dc2c0d190c20c911d1dceecde13855f674db7a4a2e46bc2" Namespace="calico-system" Pod="whisker-687c765d98-jfjzf" WorkloadEndpoint="172--232--4--171-k8s-whisker--687c765d98--jfjzf-eth0" Jan 22 00:35:19.178470 containerd[1629]: 2026-01-22 00:35:19.151 [INFO][3931] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e66bf800741a5ad9dc2c0d190c20c911d1dceecde13855f674db7a4a2e46bc2" Namespace="calico-system" Pod="whisker-687c765d98-jfjzf" WorkloadEndpoint="172--232--4--171-k8s-whisker--687c765d98--jfjzf-eth0" Jan 22 00:35:19.183643 containerd[1629]: 2026-01-22 00:35:19.152 [INFO][3931] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8e66bf800741a5ad9dc2c0d190c20c911d1dceecde13855f674db7a4a2e46bc2" Namespace="calico-system" Pod="whisker-687c765d98-jfjzf" 
WorkloadEndpoint="172--232--4--171-k8s-whisker--687c765d98--jfjzf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--4--171-k8s-whisker--687c765d98--jfjzf-eth0", GenerateName:"whisker-687c765d98-", Namespace:"calico-system", SelfLink:"", UID:"8ecd20e5-2e31-4297-a10a-ea50808543e7", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 35, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"687c765d98", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-4-171", ContainerID:"8e66bf800741a5ad9dc2c0d190c20c911d1dceecde13855f674db7a4a2e46bc2", Pod:"whisker-687c765d98-jfjzf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.71.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calida957aaad48", MAC:"8a:40:b4:b7:24:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:35:19.183721 containerd[1629]: 2026-01-22 00:35:19.169 [INFO][3931] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8e66bf800741a5ad9dc2c0d190c20c911d1dceecde13855f674db7a4a2e46bc2" Namespace="calico-system" Pod="whisker-687c765d98-jfjzf" WorkloadEndpoint="172--232--4--171-k8s-whisker--687c765d98--jfjzf-eth0" Jan 22 00:35:19.258799 containerd[1629]: time="2026-01-22T00:35:19.258483827Z" level=info msg="connecting to shim 8e66bf800741a5ad9dc2c0d190c20c911d1dceecde13855f674db7a4a2e46bc2" address="unix:///run/containerd/s/3d3a267e3e3f81d2cb01372512cd28dc0972b72d68f5b75df0baa0afd50ce0f1" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:35:19.339783 systemd[1]: Started cri-containerd-8e66bf800741a5ad9dc2c0d190c20c911d1dceecde13855f674db7a4a2e46bc2.scope - libcontainer container 8e66bf800741a5ad9dc2c0d190c20c911d1dceecde13855f674db7a4a2e46bc2. 
Jan 22 00:35:19.396000 audit: BPF prog-id=187 op=LOAD Jan 22 00:35:19.399000 audit: BPF prog-id=188 op=LOAD Jan 22 00:35:19.399000 audit[4026]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=4010 pid=4026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:19.399000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865363662663830303734316135616439646332633064313930633230 Jan 22 00:35:19.399000 audit: BPF prog-id=188 op=UNLOAD Jan 22 00:35:19.399000 audit[4026]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4010 pid=4026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:19.399000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865363662663830303734316135616439646332633064313930633230 Jan 22 00:35:19.399000 audit: BPF prog-id=189 op=LOAD Jan 22 00:35:19.399000 audit[4026]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=4010 pid=4026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:19.399000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865363662663830303734316135616439646332633064313930633230 Jan 22 00:35:19.399000 audit: BPF prog-id=190 op=LOAD Jan 22 00:35:19.399000 audit[4026]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=4010 pid=4026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:19.399000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865363662663830303734316135616439646332633064313930633230 Jan 22 00:35:19.399000 audit: BPF prog-id=190 op=UNLOAD Jan 22 00:35:19.399000 audit[4026]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4010 pid=4026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:19.399000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865363662663830303734316135616439646332633064313930633230 Jan 22 00:35:19.399000 audit: BPF prog-id=189 op=UNLOAD Jan 22 00:35:19.399000 audit[4026]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4010 pid=4026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:19.399000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865363662663830303734316135616439646332633064313930633230 Jan 22 00:35:19.399000 audit: BPF prog-id=191 op=LOAD Jan 22 00:35:19.399000 audit[4026]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=4010 pid=4026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:19.399000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865363662663830303734316135616439646332633064313930633230 Jan 22 00:35:19.426402 kubelet[2805]: I0122 00:35:19.426341 2805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e76e7dc-ca5d-46c4-8bbe-50d71d78158f" path="/var/lib/kubelet/pods/3e76e7dc-ca5d-46c4-8bbe-50d71d78158f/volumes" Jan 22 00:35:19.507359 containerd[1629]: time="2026-01-22T00:35:19.505984021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-687c765d98-jfjzf,Uid:8ecd20e5-2e31-4297-a10a-ea50808543e7,Namespace:calico-system,Attempt:0,} returns sandbox id \"8e66bf800741a5ad9dc2c0d190c20c911d1dceecde13855f674db7a4a2e46bc2\"" Jan 22 00:35:19.512225 containerd[1629]: time="2026-01-22T00:35:19.511835635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 22 00:35:19.672325 containerd[1629]: time="2026-01-22T00:35:19.672244481Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:35:19.673361 containerd[1629]: time="2026-01-22T00:35:19.673290173Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 22 00:35:19.673709 containerd[1629]: time="2026-01-22T00:35:19.673321226Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 22 00:35:19.673889 kubelet[2805]: E0122 00:35:19.673837 2805 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 22 00:35:19.674282 kubelet[2805]: E0122 00:35:19.673896 2805 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 22 00:35:19.674282 kubelet[2805]: E0122 00:35:19.674002 2805 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod 
whisker-687c765d98-jfjzf_calico-system(8ecd20e5-2e31-4297-a10a-ea50808543e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 22 00:35:19.676299 containerd[1629]: time="2026-01-22T00:35:19.676211650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 22 00:35:19.816957 containerd[1629]: time="2026-01-22T00:35:19.815324938Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:35:19.816957 containerd[1629]: time="2026-01-22T00:35:19.816763659Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 22 00:35:19.816957 containerd[1629]: time="2026-01-22T00:35:19.816880140Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 22 00:35:19.817618 kubelet[2805]: E0122 00:35:19.817438 2805 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 22 00:35:19.817618 kubelet[2805]: E0122 00:35:19.817532 2805 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 22 00:35:19.818696 kubelet[2805]: E0122 00:35:19.817654 2805 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-687c765d98-jfjzf_calico-system(8ecd20e5-2e31-4297-a10a-ea50808543e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 22 00:35:19.818696 kubelet[2805]: E0122 00:35:19.817712 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-687c765d98-jfjzf" podUID="8ecd20e5-2e31-4297-a10a-ea50808543e7" Jan 22 00:35:20.467914 systemd-networkd[1515]: calida957aaad48: Gained IPv6LL Jan 22 00:35:20.583753 kubelet[2805]: E0122 00:35:20.583686 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-687c765d98-jfjzf" podUID="8ecd20e5-2e31-4297-a10a-ea50808543e7" Jan 22 00:35:20.628000 audit[4105]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=4105 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:35:20.628000 audit[4105]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd20ba2a10 a2=0 a3=7ffd20ba29fc items=0 ppid=2914 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:20.628000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:35:20.633000 audit[4105]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=4105 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:35:20.633000 audit[4105]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd20ba2a10 a2=0 a3=0 items=0 ppid=2914 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:20.633000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:35:23.426526 containerd[1629]: time="2026-01-22T00:35:23.426439233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bbb7b878c-c279j,Uid:ac5cf267-1601-4ae5-91e1-dc1496ea695f,Namespace:calico-apiserver,Attempt:0,}" Jan 22 00:35:23.545232 systemd-networkd[1515]: caliaa15db37136: Link UP Jan 22 00:35:23.547736 systemd-networkd[1515]: caliaa15db37136: Gained carrier Jan 22 00:35:23.567475 containerd[1629]: 2026-01-22 00:35:23.462 [INFO][4166] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 22 00:35:23.567475 containerd[1629]: 2026-01-22 00:35:23.475 [INFO][4166] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--4--171-k8s-calico--apiserver--bbb7b878c--c279j-eth0 calico-apiserver-bbb7b878c- calico-apiserver ac5cf267-1601-4ae5-91e1-dc1496ea695f 822 0 2026-01-22 00:35:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:bbb7b878c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-232-4-171 calico-apiserver-bbb7b878c-c279j eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliaa15db37136 [] [] }} ContainerID="1ebfa69a001c93d8bb4ffd9f1d8a5a5556a642b578126dd94af10819ae5adebe" Namespace="calico-apiserver" Pod="calico-apiserver-bbb7b878c-c279j" WorkloadEndpoint="172--232--4--171-k8s-calico--apiserver--bbb7b878c--c279j-" Jan 22 00:35:23.567475 
containerd[1629]: 2026-01-22 00:35:23.475 [INFO][4166] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1ebfa69a001c93d8bb4ffd9f1d8a5a5556a642b578126dd94af10819ae5adebe" Namespace="calico-apiserver" Pod="calico-apiserver-bbb7b878c-c279j" WorkloadEndpoint="172--232--4--171-k8s-calico--apiserver--bbb7b878c--c279j-eth0" Jan 22 00:35:23.567475 containerd[1629]: 2026-01-22 00:35:23.501 [INFO][4177] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1ebfa69a001c93d8bb4ffd9f1d8a5a5556a642b578126dd94af10819ae5adebe" HandleID="k8s-pod-network.1ebfa69a001c93d8bb4ffd9f1d8a5a5556a642b578126dd94af10819ae5adebe" Workload="172--232--4--171-k8s-calico--apiserver--bbb7b878c--c279j-eth0" Jan 22 00:35:23.567813 containerd[1629]: 2026-01-22 00:35:23.502 [INFO][4177] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1ebfa69a001c93d8bb4ffd9f1d8a5a5556a642b578126dd94af10819ae5adebe" HandleID="k8s-pod-network.1ebfa69a001c93d8bb4ffd9f1d8a5a5556a642b578126dd94af10819ae5adebe" Workload="172--232--4--171-k8s-calico--apiserver--bbb7b878c--c279j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002bd870), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-232-4-171", "pod":"calico-apiserver-bbb7b878c-c279j", "timestamp":"2026-01-22 00:35:23.501871478 +0000 UTC"}, Hostname:"172-232-4-171", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 22 00:35:23.567813 containerd[1629]: 2026-01-22 00:35:23.502 [INFO][4177] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 22 00:35:23.567813 containerd[1629]: 2026-01-22 00:35:23.502 [INFO][4177] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 22 00:35:23.567813 containerd[1629]: 2026-01-22 00:35:23.502 [INFO][4177] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-4-171' Jan 22 00:35:23.567813 containerd[1629]: 2026-01-22 00:35:23.510 [INFO][4177] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1ebfa69a001c93d8bb4ffd9f1d8a5a5556a642b578126dd94af10819ae5adebe" host="172-232-4-171" Jan 22 00:35:23.567813 containerd[1629]: 2026-01-22 00:35:23.515 [INFO][4177] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-4-171" Jan 22 00:35:23.567813 containerd[1629]: 2026-01-22 00:35:23.522 [INFO][4177] ipam/ipam.go 511: Trying affinity for 192.168.71.64/26 host="172-232-4-171" Jan 22 00:35:23.567813 containerd[1629]: 2026-01-22 00:35:23.524 [INFO][4177] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.64/26 host="172-232-4-171" Jan 22 00:35:23.567813 containerd[1629]: 2026-01-22 00:35:23.527 [INFO][4177] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.64/26 host="172-232-4-171" Jan 22 00:35:23.567813 containerd[1629]: 2026-01-22 00:35:23.527 [INFO][4177] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.71.64/26 handle="k8s-pod-network.1ebfa69a001c93d8bb4ffd9f1d8a5a5556a642b578126dd94af10819ae5adebe" host="172-232-4-171" Jan 22 00:35:23.568244 containerd[1629]: 2026-01-22 00:35:23.528 [INFO][4177] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1ebfa69a001c93d8bb4ffd9f1d8a5a5556a642b578126dd94af10819ae5adebe Jan 22 00:35:23.568244 containerd[1629]: 2026-01-22 00:35:23.533 [INFO][4177] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.71.64/26 handle="k8s-pod-network.1ebfa69a001c93d8bb4ffd9f1d8a5a5556a642b578126dd94af10819ae5adebe" host="172-232-4-171" Jan 22 00:35:23.568244 containerd[1629]: 2026-01-22 00:35:23.537 [INFO][4177] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.71.66/26] block=192.168.71.64/26 handle="k8s-pod-network.1ebfa69a001c93d8bb4ffd9f1d8a5a5556a642b578126dd94af10819ae5adebe" host="172-232-4-171" Jan 22 00:35:23.568244 containerd[1629]: 2026-01-22 00:35:23.537 [INFO][4177] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.66/26] handle="k8s-pod-network.1ebfa69a001c93d8bb4ffd9f1d8a5a5556a642b578126dd94af10819ae5adebe" host="172-232-4-171" Jan 22 00:35:23.568244 containerd[1629]: 2026-01-22 00:35:23.537 [INFO][4177] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 22 00:35:23.568244 containerd[1629]: 2026-01-22 00:35:23.538 [INFO][4177] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.71.66/26] IPv6=[] ContainerID="1ebfa69a001c93d8bb4ffd9f1d8a5a5556a642b578126dd94af10819ae5adebe" HandleID="k8s-pod-network.1ebfa69a001c93d8bb4ffd9f1d8a5a5556a642b578126dd94af10819ae5adebe" Workload="172--232--4--171-k8s-calico--apiserver--bbb7b878c--c279j-eth0" Jan 22 00:35:23.569576 containerd[1629]: 2026-01-22 00:35:23.542 [INFO][4166] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1ebfa69a001c93d8bb4ffd9f1d8a5a5556a642b578126dd94af10819ae5adebe" Namespace="calico-apiserver" Pod="calico-apiserver-bbb7b878c-c279j" WorkloadEndpoint="172--232--4--171-k8s-calico--apiserver--bbb7b878c--c279j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--4--171-k8s-calico--apiserver--bbb7b878c--c279j-eth0", GenerateName:"calico-apiserver-bbb7b878c-", Namespace:"calico-apiserver", SelfLink:"", UID:"ac5cf267-1601-4ae5-91e1-dc1496ea695f", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 35, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bbb7b878c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-4-171", ContainerID:"", Pod:"calico-apiserver-bbb7b878c-c279j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaa15db37136", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:35:23.569641 containerd[1629]: 2026-01-22 00:35:23.542 [INFO][4166] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.66/32] ContainerID="1ebfa69a001c93d8bb4ffd9f1d8a5a5556a642b578126dd94af10819ae5adebe" Namespace="calico-apiserver" Pod="calico-apiserver-bbb7b878c-c279j" WorkloadEndpoint="172--232--4--171-k8s-calico--apiserver--bbb7b878c--c279j-eth0" Jan 22 00:35:23.569641 containerd[1629]: 2026-01-22 00:35:23.542 [INFO][4166] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa15db37136 ContainerID="1ebfa69a001c93d8bb4ffd9f1d8a5a5556a642b578126dd94af10819ae5adebe" Namespace="calico-apiserver" Pod="calico-apiserver-bbb7b878c-c279j" WorkloadEndpoint="172--232--4--171-k8s-calico--apiserver--bbb7b878c--c279j-eth0" Jan 22 00:35:23.569641 containerd[1629]: 2026-01-22 00:35:23.546 [INFO][4166] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1ebfa69a001c93d8bb4ffd9f1d8a5a5556a642b578126dd94af10819ae5adebe" Namespace="calico-apiserver" Pod="calico-apiserver-bbb7b878c-c279j" WorkloadEndpoint="172--232--4--171-k8s-calico--apiserver--bbb7b878c--c279j-eth0" Jan 22 00:35:23.569719 containerd[1629]: 2026-01-22 00:35:23.547 [INFO][4166] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1ebfa69a001c93d8bb4ffd9f1d8a5a5556a642b578126dd94af10819ae5adebe" Namespace="calico-apiserver" Pod="calico-apiserver-bbb7b878c-c279j" WorkloadEndpoint="172--232--4--171-k8s-calico--apiserver--bbb7b878c--c279j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--4--171-k8s-calico--apiserver--bbb7b878c--c279j-eth0", GenerateName:"calico-apiserver-bbb7b878c-", Namespace:"calico-apiserver", SelfLink:"", UID:"ac5cf267-1601-4ae5-91e1-dc1496ea695f", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 35, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bbb7b878c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-4-171", ContainerID:"1ebfa69a001c93d8bb4ffd9f1d8a5a5556a642b578126dd94af10819ae5adebe", Pod:"calico-apiserver-bbb7b878c-c279j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaa15db37136", MAC:"36:e6:6e:fb:59:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:35:23.569779 containerd[1629]: 2026-01-22 00:35:23.561 [INFO][4166] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1ebfa69a001c93d8bb4ffd9f1d8a5a5556a642b578126dd94af10819ae5adebe" Namespace="calico-apiserver" Pod="calico-apiserver-bbb7b878c-c279j" WorkloadEndpoint="172--232--4--171-k8s-calico--apiserver--bbb7b878c--c279j-eth0" Jan 22 00:35:23.601863 containerd[1629]: time="2026-01-22T00:35:23.601803996Z" level=info msg="connecting to shim 1ebfa69a001c93d8bb4ffd9f1d8a5a5556a642b578126dd94af10819ae5adebe" address="unix:///run/containerd/s/bf673d17d3ff89fb7c24f0ad74f70db624089ba7fa211e79aa95b980f5385acf" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:35:23.641698 systemd[1]: Started cri-containerd-1ebfa69a001c93d8bb4ffd9f1d8a5a5556a642b578126dd94af10819ae5adebe.scope - libcontainer container 1ebfa69a001c93d8bb4ffd9f1d8a5a5556a642b578126dd94af10819ae5adebe. 
Jan 22 00:35:23.659000 audit: BPF prog-id=192 op=LOAD Jan 22 00:35:23.662673 kernel: kauditd_printk_skb: 33 callbacks suppressed Jan 22 00:35:23.662772 kernel: audit: type=1334 audit(1769042123.659:601): prog-id=192 op=LOAD Jan 22 00:35:23.666110 kernel: audit: type=1334 audit(1769042123.663:602): prog-id=193 op=LOAD Jan 22 00:35:23.663000 audit: BPF prog-id=193 op=LOAD Jan 22 00:35:23.673644 kernel: audit: type=1300 audit(1769042123.663:602): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4202 pid=4213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:23.663000 audit[4213]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4202 pid=4213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:23.677584 kernel: audit: type=1327 audit(1769042123.663:602): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165626661363961303031633933643862623466666439663164386135 Jan 22 00:35:23.663000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165626661363961303031633933643862623466666439663164386135 Jan 22 00:35:23.685586 kernel: audit: type=1334 audit(1769042123.663:603): prog-id=193 op=UNLOAD Jan 22 00:35:23.663000 audit: BPF prog-id=193 op=UNLOAD Jan 22 00:35:23.693969 kernel: audit: type=1300 audit(1769042123.663:603): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4202 pid=4213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:23.663000 audit[4213]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4202 pid=4213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:23.663000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165626661363961303031633933643862623466666439663164386135 Jan 22 00:35:23.703938 kernel: audit: type=1327 audit(1769042123.663:603): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165626661363961303031633933643862623466666439663164386135 Jan 22 00:35:23.663000 audit: BPF prog-id=194 op=LOAD Jan 22 00:35:23.709157 kernel: audit: type=1334 audit(1769042123.663:604): prog-id=194 op=LOAD Jan 22 00:35:23.709286 kernel: audit: type=1300 audit(1769042123.663:604): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4202 pid=4213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:23.663000 audit[4213]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4202 pid=4213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:23.663000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165626661363961303031633933643862623466666439663164386135 Jan 22 00:35:23.726692 kernel: audit: type=1327 audit(1769042123.663:604): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165626661363961303031633933643862623466666439663164386135 Jan 22 00:35:23.663000 audit: BPF prog-id=195 op=LOAD Jan 22 00:35:23.663000 audit[4213]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4202 pid=4213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:23.663000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165626661363961303031633933643862623466666439663164386135 Jan 22 00:35:23.663000 audit: BPF prog-id=195 op=UNLOAD Jan 22 00:35:23.663000 audit[4213]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4202 pid=4213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:23.663000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165626661363961303031633933643862623466666439663164386135 Jan 22 00:35:23.663000 audit: BPF prog-id=194 op=UNLOAD Jan 22 00:35:23.663000 audit[4213]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4202 pid=4213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:23.663000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165626661363961303031633933643862623466666439663164386135 Jan 22 00:35:23.664000 audit: BPF prog-id=196 op=LOAD Jan 22 00:35:23.664000 audit[4213]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4202 pid=4213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:23.664000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165626661363961303031633933643862623466666439663164386135 Jan 22 00:35:23.744954 containerd[1629]: time="2026-01-22T00:35:23.744836877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bbb7b878c-c279j,Uid:ac5cf267-1601-4ae5-91e1-dc1496ea695f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1ebfa69a001c93d8bb4ffd9f1d8a5a5556a642b578126dd94af10819ae5adebe\"" Jan 22 00:35:23.749431 containerd[1629]: time="2026-01-22T00:35:23.748923666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 22 00:35:23.874843 containerd[1629]: time="2026-01-22T00:35:23.874758078Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:35:23.876611 containerd[1629]: time="2026-01-22T00:35:23.876549990Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 22 00:35:23.876792 containerd[1629]: time="2026-01-22T00:35:23.876564671Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 22 00:35:23.877228 kubelet[2805]: E0122 00:35:23.877109 2805 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:35:23.877228 kubelet[2805]: E0122 00:35:23.877180 2805 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:35:23.877983 kubelet[2805]: E0122 00:35:23.877954 2805 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-bbb7b878c-c279j_calico-apiserver(ac5cf267-1601-4ae5-91e1-dc1496ea695f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 22 00:35:23.878099 kubelet[2805]: E0122 00:35:23.878070 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bbb7b878c-c279j" podUID="ac5cf267-1601-4ae5-91e1-dc1496ea695f" Jan 22 00:35:24.425543 containerd[1629]: time="2026-01-22T00:35:24.425399300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-jv75z,Uid:7058f0b0-e750-4a4f-832e-cf58713e25a5,Namespace:calico-system,Attempt:0,}" Jan 22 00:35:24.426549 kubelet[2805]: E0122 00:35:24.426483 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:35:24.428722 kubelet[2805]: E0122 00:35:24.428006 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:35:24.428783 containerd[1629]: time="2026-01-22T00:35:24.428478814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-b6ffb,Uid:38af0d26-1bef-4093-8245-e7a246914084,Namespace:kube-system,Attempt:0,}" Jan 22 00:35:24.429402 containerd[1629]: time="2026-01-22T00:35:24.428839684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-v7rsm,Uid:8d4bbd0f-fe7b-41ce-884e-a153734deda0,Namespace:kube-system,Attempt:0,}" Jan 22 00:35:24.609344 systemd-networkd[1515]: cali5355a12d2d8: Link UP Jan 22 00:35:24.611474 kubelet[2805]: E0122 00:35:24.611149 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bbb7b878c-c279j" podUID="ac5cf267-1601-4ae5-91e1-dc1496ea695f" Jan 22 00:35:24.613212 systemd-networkd[1515]: cali5355a12d2d8: Gained carrier Jan 22 00:35:24.641925 containerd[1629]: 2026-01-22 00:35:24.504 [INFO][4260] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 22 00:35:24.641925 containerd[1629]: 2026-01-22 00:35:24.521 [INFO][4260] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--4--171-k8s-coredns--66bc5c9577--v7rsm-eth0 coredns-66bc5c9577- kube-system 8d4bbd0f-fe7b-41ce-884e-a153734deda0 825 0 2026-01-22 00:34:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-232-4-171 coredns-66bc5c9577-v7rsm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5355a12d2d8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="a880868f38b0ceeeb0add7633392c9e90cf5f1137c2c5b2744bff4964d7c8308" Namespace="kube-system" Pod="coredns-66bc5c9577-v7rsm" WorkloadEndpoint="172--232--4--171-k8s-coredns--66bc5c9577--v7rsm-" Jan 22 00:35:24.641925 containerd[1629]: 2026-01-22 00:35:24.521 [INFO][4260] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a880868f38b0ceeeb0add7633392c9e90cf5f1137c2c5b2744bff4964d7c8308" Namespace="kube-system" Pod="coredns-66bc5c9577-v7rsm" WorkloadEndpoint="172--232--4--171-k8s-coredns--66bc5c9577--v7rsm-eth0" Jan 22 00:35:24.641925 containerd[1629]: 2026-01-22 00:35:24.551 [INFO][4303] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a880868f38b0ceeeb0add7633392c9e90cf5f1137c2c5b2744bff4964d7c8308" HandleID="k8s-pod-network.a880868f38b0ceeeb0add7633392c9e90cf5f1137c2c5b2744bff4964d7c8308" Workload="172--232--4--171-k8s-coredns--66bc5c9577--v7rsm-eth0" Jan 22 00:35:24.642233 containerd[1629]: 2026-01-22 00:35:24.552 [INFO][4303] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="a880868f38b0ceeeb0add7633392c9e90cf5f1137c2c5b2744bff4964d7c8308" HandleID="k8s-pod-network.a880868f38b0ceeeb0add7633392c9e90cf5f1137c2c5b2744bff4964d7c8308" Workload="172--232--4--171-k8s-coredns--66bc5c9577--v7rsm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55e0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-232-4-171", "pod":"coredns-66bc5c9577-v7rsm", "timestamp":"2026-01-22 00:35:24.551953093 +0000 UTC"}, Hostname:"172-232-4-171", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 22 00:35:24.642233 containerd[1629]: 2026-01-22 00:35:24.552 [INFO][4303] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 22 00:35:24.642233 containerd[1629]: 2026-01-22 00:35:24.552 [INFO][4303] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 22 00:35:24.642233 containerd[1629]: 2026-01-22 00:35:24.552 [INFO][4303] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-4-171' Jan 22 00:35:24.642233 containerd[1629]: 2026-01-22 00:35:24.562 [INFO][4303] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a880868f38b0ceeeb0add7633392c9e90cf5f1137c2c5b2744bff4964d7c8308" host="172-232-4-171" Jan 22 00:35:24.642233 containerd[1629]: 2026-01-22 00:35:24.567 [INFO][4303] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-4-171" Jan 22 00:35:24.642233 containerd[1629]: 2026-01-22 00:35:24.572 [INFO][4303] ipam/ipam.go 511: Trying affinity for 192.168.71.64/26 host="172-232-4-171" Jan 22 00:35:24.642233 containerd[1629]: 2026-01-22 00:35:24.574 [INFO][4303] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.64/26 host="172-232-4-171" Jan 22 00:35:24.642233 containerd[1629]: 2026-01-22 00:35:24.576 [INFO][4303] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.64/26 host="172-232-4-171" Jan 22 00:35:24.642233 containerd[1629]: 2026-01-22 00:35:24.576 [INFO][4303] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.71.64/26 handle="k8s-pod-network.a880868f38b0ceeeb0add7633392c9e90cf5f1137c2c5b2744bff4964d7c8308" host="172-232-4-171" Jan 22 00:35:24.643440 containerd[1629]: 2026-01-22 00:35:24.577 [INFO][4303] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a880868f38b0ceeeb0add7633392c9e90cf5f1137c2c5b2744bff4964d7c8308 Jan 22 00:35:24.643440 containerd[1629]: 2026-01-22 00:35:24.581 [INFO][4303] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.71.64/26 handle="k8s-pod-network.a880868f38b0ceeeb0add7633392c9e90cf5f1137c2c5b2744bff4964d7c8308" host="172-232-4-171" Jan 22 00:35:24.643440 containerd[1629]: 2026-01-22 00:35:24.586 [INFO][4303] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.71.67/26] block=192.168.71.64/26 handle="k8s-pod-network.a880868f38b0ceeeb0add7633392c9e90cf5f1137c2c5b2744bff4964d7c8308" host="172-232-4-171" Jan 22 00:35:24.643440 containerd[1629]: 2026-01-22 00:35:24.586 [INFO][4303] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.67/26] handle="k8s-pod-network.a880868f38b0ceeeb0add7633392c9e90cf5f1137c2c5b2744bff4964d7c8308" host="172-232-4-171" Jan 22 00:35:24.643440 containerd[1629]: 2026-01-22 00:35:24.586 [INFO][4303] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 22 00:35:24.643440 containerd[1629]: 2026-01-22 00:35:24.586 [INFO][4303] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.71.67/26] IPv6=[] ContainerID="a880868f38b0ceeeb0add7633392c9e90cf5f1137c2c5b2744bff4964d7c8308" HandleID="k8s-pod-network.a880868f38b0ceeeb0add7633392c9e90cf5f1137c2c5b2744bff4964d7c8308" Workload="172--232--4--171-k8s-coredns--66bc5c9577--v7rsm-eth0" Jan 22 00:35:24.643688 containerd[1629]: 2026-01-22 00:35:24.591 [INFO][4260] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a880868f38b0ceeeb0add7633392c9e90cf5f1137c2c5b2744bff4964d7c8308" Namespace="kube-system" Pod="coredns-66bc5c9577-v7rsm" WorkloadEndpoint="172--232--4--171-k8s-coredns--66bc5c9577--v7rsm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--4--171-k8s-coredns--66bc5c9577--v7rsm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"8d4bbd0f-fe7b-41ce-884e-a153734deda0", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 34, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-4-171", ContainerID:"", Pod:"coredns-66bc5c9577-v7rsm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5355a12d2d8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:35:24.643688 containerd[1629]: 2026-01-22 00:35:24.591 [INFO][4260] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.67/32] ContainerID="a880868f38b0ceeeb0add7633392c9e90cf5f1137c2c5b2744bff4964d7c8308" Namespace="kube-system" Pod="coredns-66bc5c9577-v7rsm" WorkloadEndpoint="172--232--4--171-k8s-coredns--66bc5c9577--v7rsm-eth0" Jan 22 00:35:24.643688 containerd[1629]: 2026-01-22 00:35:24.591 [INFO][4260] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5355a12d2d8 ContainerID="a880868f38b0ceeeb0add7633392c9e90cf5f1137c2c5b2744bff4964d7c8308" Namespace="kube-system" Pod="coredns-66bc5c9577-v7rsm" WorkloadEndpoint="172--232--4--171-k8s-coredns--66bc5c9577--v7rsm-eth0" Jan 22 00:35:24.643688 
containerd[1629]: 2026-01-22 00:35:24.614 [INFO][4260] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a880868f38b0ceeeb0add7633392c9e90cf5f1137c2c5b2744bff4964d7c8308" Namespace="kube-system" Pod="coredns-66bc5c9577-v7rsm" WorkloadEndpoint="172--232--4--171-k8s-coredns--66bc5c9577--v7rsm-eth0" Jan 22 00:35:24.643688 containerd[1629]: 2026-01-22 00:35:24.619 [INFO][4260] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a880868f38b0ceeeb0add7633392c9e90cf5f1137c2c5b2744bff4964d7c8308" Namespace="kube-system" Pod="coredns-66bc5c9577-v7rsm" WorkloadEndpoint="172--232--4--171-k8s-coredns--66bc5c9577--v7rsm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--4--171-k8s-coredns--66bc5c9577--v7rsm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"8d4bbd0f-fe7b-41ce-884e-a153734deda0", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 34, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-4-171", ContainerID:"a880868f38b0ceeeb0add7633392c9e90cf5f1137c2c5b2744bff4964d7c8308", Pod:"coredns-66bc5c9577-v7rsm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5355a12d2d8", MAC:"c6:f8:60:da:1f:25", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:35:24.643688 containerd[1629]: 2026-01-22 00:35:24.632 [INFO][4260] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a880868f38b0ceeeb0add7633392c9e90cf5f1137c2c5b2744bff4964d7c8308" Namespace="kube-system" Pod="coredns-66bc5c9577-v7rsm" WorkloadEndpoint="172--232--4--171-k8s-coredns--66bc5c9577--v7rsm-eth0" Jan 22 00:35:24.655000 audit[4323]: NETFILTER_CFG table=filter:119 family=2 entries=22 op=nft_register_rule pid=4323 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:35:24.655000 audit[4323]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffed85f15f0 a2=0 a3=7ffed85f15dc items=0 ppid=2914 pid=4323 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:24.655000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:35:24.657000 audit[4323]: NETFILTER_CFG table=nat:120 family=2 entries=12 op=nft_register_rule pid=4323 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:35:24.657000 audit[4323]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffed85f15f0 a2=0 a3=0 items=0 ppid=2914 pid=4323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:24.657000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:35:24.698891 containerd[1629]: time="2026-01-22T00:35:24.698851671Z" level=info msg="connecting to shim a880868f38b0ceeeb0add7633392c9e90cf5f1137c2c5b2744bff4964d7c8308" address="unix:///run/containerd/s/1830ce13d0c77f25d6bb9345512c7df3eff3082d789f585460a6266533126661" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:35:24.750807 systemd[1]: Started cri-containerd-a880868f38b0ceeeb0add7633392c9e90cf5f1137c2c5b2744bff4964d7c8308.scope - libcontainer container a880868f38b0ceeeb0add7633392c9e90cf5f1137c2c5b2744bff4964d7c8308. Jan 22 00:35:24.755234 systemd-networkd[1515]: calic99323f2cb7: Link UP Jan 22 00:35:24.757654 systemd-networkd[1515]: calic99323f2cb7: Gained carrier Jan 22 00:35:24.785000 audit: BPF prog-id=197 op=LOAD Jan 22 00:35:24.786587 containerd[1629]: 2026-01-22 00:35:24.499 [INFO][4261] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 22 00:35:24.786587 containerd[1629]: 2026-01-22 00:35:24.515 [INFO][4261] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--4--171-k8s-goldmane--7c778bb748--jv75z-eth0 goldmane-7c778bb748- calico-system 7058f0b0-e750-4a4f-832e-cf58713e25a5 821 0 2026-01-22 00:35:02 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-232-4-171 goldmane-7c778bb748-jv75z eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic99323f2cb7 [] [] }} ContainerID="5a26531a8b30dffcedcf203aace05a292dc64684f836f5532e2dc1b47633ba70" Namespace="calico-system" Pod="goldmane-7c778bb748-jv75z" WorkloadEndpoint="172--232--4--171-k8s-goldmane--7c778bb748--jv75z-" Jan 22 00:35:24.786587 containerd[1629]: 2026-01-22 00:35:24.515 [INFO][4261] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5a26531a8b30dffcedcf203aace05a292dc64684f836f5532e2dc1b47633ba70" Namespace="calico-system" Pod="goldmane-7c778bb748-jv75z" WorkloadEndpoint="172--232--4--171-k8s-goldmane--7c778bb748--jv75z-eth0" Jan 22 00:35:24.786587 containerd[1629]: 2026-01-22 00:35:24.572 [INFO][4296] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5a26531a8b30dffcedcf203aace05a292dc64684f836f5532e2dc1b47633ba70" HandleID="k8s-pod-network.5a26531a8b30dffcedcf203aace05a292dc64684f836f5532e2dc1b47633ba70" Workload="172--232--4--171-k8s-goldmane--7c778bb748--jv75z-eth0" Jan 22 00:35:24.786587 containerd[1629]: 2026-01-22 
00:35:24.572 [INFO][4296] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5a26531a8b30dffcedcf203aace05a292dc64684f836f5532e2dc1b47633ba70" HandleID="k8s-pod-network.5a26531a8b30dffcedcf203aace05a292dc64684f836f5532e2dc1b47633ba70" Workload="172--232--4--171-k8s-goldmane--7c778bb748--jv75z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-4-171", "pod":"goldmane-7c778bb748-jv75z", "timestamp":"2026-01-22 00:35:24.572184038 +0000 UTC"}, Hostname:"172-232-4-171", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 22 00:35:24.786587 containerd[1629]: 2026-01-22 00:35:24.572 [INFO][4296] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 22 00:35:24.786587 containerd[1629]: 2026-01-22 00:35:24.586 [INFO][4296] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 22 00:35:24.786587 containerd[1629]: 2026-01-22 00:35:24.587 [INFO][4296] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-4-171' Jan 22 00:35:24.786587 containerd[1629]: 2026-01-22 00:35:24.663 [INFO][4296] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5a26531a8b30dffcedcf203aace05a292dc64684f836f5532e2dc1b47633ba70" host="172-232-4-171" Jan 22 00:35:24.786587 containerd[1629]: 2026-01-22 00:35:24.680 [INFO][4296] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-4-171" Jan 22 00:35:24.786587 containerd[1629]: 2026-01-22 00:35:24.690 [INFO][4296] ipam/ipam.go 511: Trying affinity for 192.168.71.64/26 host="172-232-4-171" Jan 22 00:35:24.786587 containerd[1629]: 2026-01-22 00:35:24.694 [INFO][4296] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.64/26 host="172-232-4-171" Jan 22 00:35:24.786587 containerd[1629]: 2026-01-22 00:35:24.701 [INFO][4296] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.64/26 host="172-232-4-171" Jan 22 00:35:24.786587 containerd[1629]: 2026-01-22 00:35:24.701 [INFO][4296] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.71.64/26 handle="k8s-pod-network.5a26531a8b30dffcedcf203aace05a292dc64684f836f5532e2dc1b47633ba70" host="172-232-4-171" Jan 22 00:35:24.786587 containerd[1629]: 2026-01-22 00:35:24.703 [INFO][4296] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5a26531a8b30dffcedcf203aace05a292dc64684f836f5532e2dc1b47633ba70 Jan 22 00:35:24.786587 containerd[1629]: 2026-01-22 00:35:24.713 [INFO][4296] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.71.64/26 handle="k8s-pod-network.5a26531a8b30dffcedcf203aace05a292dc64684f836f5532e2dc1b47633ba70" host="172-232-4-171" Jan 22 00:35:24.786587 containerd[1629]: 2026-01-22 00:35:24.729 [INFO][4296] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.71.68/26] block=192.168.71.64/26 handle="k8s-pod-network.5a26531a8b30dffcedcf203aace05a292dc64684f836f5532e2dc1b47633ba70" host="172-232-4-171" Jan 22 00:35:24.786587 containerd[1629]: 2026-01-22 00:35:24.729 [INFO][4296] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.68/26] handle="k8s-pod-network.5a26531a8b30dffcedcf203aace05a292dc64684f836f5532e2dc1b47633ba70" host="172-232-4-171" Jan 22 00:35:24.786587 containerd[1629]: 2026-01-22 00:35:24.729 [INFO][4296] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 22 00:35:24.786587 containerd[1629]: 2026-01-22 00:35:24.729 [INFO][4296] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.71.68/26] IPv6=[] ContainerID="5a26531a8b30dffcedcf203aace05a292dc64684f836f5532e2dc1b47633ba70" HandleID="k8s-pod-network.5a26531a8b30dffcedcf203aace05a292dc64684f836f5532e2dc1b47633ba70" Workload="172--232--4--171-k8s-goldmane--7c778bb748--jv75z-eth0" Jan 22 00:35:24.787174 containerd[1629]: 2026-01-22 00:35:24.740 [INFO][4261] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5a26531a8b30dffcedcf203aace05a292dc64684f836f5532e2dc1b47633ba70" Namespace="calico-system" Pod="goldmane-7c778bb748-jv75z" WorkloadEndpoint="172--232--4--171-k8s-goldmane--7c778bb748--jv75z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--4--171-k8s-goldmane--7c778bb748--jv75z-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"7058f0b0-e750-4a4f-832e-cf58713e25a5", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 35, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-4-171", ContainerID:"", Pod:"goldmane-7c778bb748-jv75z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.71.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic99323f2cb7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:35:24.787174 containerd[1629]: 2026-01-22 00:35:24.741 [INFO][4261] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.68/32] ContainerID="5a26531a8b30dffcedcf203aace05a292dc64684f836f5532e2dc1b47633ba70" Namespace="calico-system" Pod="goldmane-7c778bb748-jv75z" WorkloadEndpoint="172--232--4--171-k8s-goldmane--7c778bb748--jv75z-eth0" Jan 22 00:35:24.787174 containerd[1629]: 2026-01-22 00:35:24.741 [INFO][4261] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic99323f2cb7 ContainerID="5a26531a8b30dffcedcf203aace05a292dc64684f836f5532e2dc1b47633ba70" Namespace="calico-system" Pod="goldmane-7c778bb748-jv75z" WorkloadEndpoint="172--232--4--171-k8s-goldmane--7c778bb748--jv75z-eth0" Jan 22 00:35:24.787174 containerd[1629]: 2026-01-22 00:35:24.762 [INFO][4261] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5a26531a8b30dffcedcf203aace05a292dc64684f836f5532e2dc1b47633ba70" Namespace="calico-system" Pod="goldmane-7c778bb748-jv75z" WorkloadEndpoint="172--232--4--171-k8s-goldmane--7c778bb748--jv75z-eth0" Jan 22 00:35:24.787174 containerd[1629]: 2026-01-22 00:35:24.763 [INFO][4261] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5a26531a8b30dffcedcf203aace05a292dc64684f836f5532e2dc1b47633ba70" Namespace="calico-system" Pod="goldmane-7c778bb748-jv75z" 
WorkloadEndpoint="172--232--4--171-k8s-goldmane--7c778bb748--jv75z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--4--171-k8s-goldmane--7c778bb748--jv75z-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"7058f0b0-e750-4a4f-832e-cf58713e25a5", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 35, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-4-171", ContainerID:"5a26531a8b30dffcedcf203aace05a292dc64684f836f5532e2dc1b47633ba70", Pod:"goldmane-7c778bb748-jv75z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.71.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic99323f2cb7", MAC:"62:31:96:96:4e:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:35:24.787174 containerd[1629]: 2026-01-22 00:35:24.779 [INFO][4261] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5a26531a8b30dffcedcf203aace05a292dc64684f836f5532e2dc1b47633ba70" Namespace="calico-system" Pod="goldmane-7c778bb748-jv75z" WorkloadEndpoint="172--232--4--171-k8s-goldmane--7c778bb748--jv75z-eth0" Jan 22 00:35:24.786000 audit: BPF prog-id=198 op=LOAD Jan 22 00:35:24.786000 audit[4344]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4332 pid=4344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:24.786000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138383038363866333862306365656562306164643736333333393263 Jan 22 00:35:24.786000 audit: BPF prog-id=198 op=UNLOAD Jan 22 00:35:24.786000 audit[4344]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4332 pid=4344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:24.786000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138383038363866333862306365656562306164643736333333393263 Jan 22 00:35:24.787000 audit: BPF prog-id=199 op=LOAD Jan 22 00:35:24.787000 audit[4344]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4332 pid=4344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:24.787000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138383038363866333862306365656562306164643736333333393263 Jan 22 00:35:24.787000 audit: BPF prog-id=200 op=LOAD Jan 22 00:35:24.787000 audit[4344]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4332 pid=4344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:24.787000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138383038363866333862306365656562306164643736333333393263 Jan 22 00:35:24.787000 audit: BPF prog-id=200 op=UNLOAD Jan 22 00:35:24.787000 audit[4344]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4332 pid=4344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:24.787000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138383038363866333862306365656562306164643736333333393263 Jan 22 00:35:24.787000 audit: BPF prog-id=199 op=UNLOAD Jan 22 00:35:24.787000 audit[4344]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4332 pid=4344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:24.787000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138383038363866333862306365656562306164643736333333393263 Jan 22 00:35:24.787000 audit: BPF prog-id=201 op=LOAD Jan 22 00:35:24.787000 audit[4344]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4332 pid=4344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:24.787000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138383038363866333862306365656562306164643736333333393263 Jan 22 00:35:24.824361 containerd[1629]: time="2026-01-22T00:35:24.824323187Z" level=info msg="connecting to shim 5a26531a8b30dffcedcf203aace05a292dc64684f836f5532e2dc1b47633ba70" address="unix:///run/containerd/s/94095d8f8cbd3ce629f763e37debf412f16ad15f4754536ab5aace403cb898d8" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:35:24.831156 systemd-networkd[1515]: cali9c08672e1b3: Link UP Jan 22 00:35:24.832966 
systemd-networkd[1515]: cali9c08672e1b3: Gained carrier Jan 22 00:35:24.861334 containerd[1629]: 2026-01-22 00:35:24.496 [INFO][4272] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 22 00:35:24.861334 containerd[1629]: 2026-01-22 00:35:24.516 [INFO][4272] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--4--171-k8s-coredns--66bc5c9577--b6ffb-eth0 coredns-66bc5c9577- kube-system 38af0d26-1bef-4093-8245-e7a246914084 816 0 2026-01-22 00:34:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-232-4-171 coredns-66bc5c9577-b6ffb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9c08672e1b3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="711fe13316e115d4a3ac2c44bd09eb1d9cda753fb2ac82bdb74927d1c7e3e739" Namespace="kube-system" Pod="coredns-66bc5c9577-b6ffb" WorkloadEndpoint="172--232--4--171-k8s-coredns--66bc5c9577--b6ffb-" Jan 22 00:35:24.861334 containerd[1629]: 2026-01-22 00:35:24.516 [INFO][4272] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="711fe13316e115d4a3ac2c44bd09eb1d9cda753fb2ac82bdb74927d1c7e3e739" Namespace="kube-system" Pod="coredns-66bc5c9577-b6ffb" WorkloadEndpoint="172--232--4--171-k8s-coredns--66bc5c9577--b6ffb-eth0" Jan 22 00:35:24.861334 containerd[1629]: 2026-01-22 00:35:24.571 [INFO][4298] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="711fe13316e115d4a3ac2c44bd09eb1d9cda753fb2ac82bdb74927d1c7e3e739" HandleID="k8s-pod-network.711fe13316e115d4a3ac2c44bd09eb1d9cda753fb2ac82bdb74927d1c7e3e739" Workload="172--232--4--171-k8s-coredns--66bc5c9577--b6ffb-eth0" Jan 22 00:35:24.861334 containerd[1629]: 2026-01-22 00:35:24.572 [INFO][4298] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="711fe13316e115d4a3ac2c44bd09eb1d9cda753fb2ac82bdb74927d1c7e3e739" HandleID="k8s-pod-network.711fe13316e115d4a3ac2c44bd09eb1d9cda753fb2ac82bdb74927d1c7e3e739" Workload="172--232--4--171-k8s-coredns--66bc5c9577--b6ffb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad370), Attrs:map[string]string{"namespace":"kube-system", "node":"172-232-4-171", "pod":"coredns-66bc5c9577-b6ffb", "timestamp":"2026-01-22 00:35:24.571421375 +0000 UTC"}, Hostname:"172-232-4-171", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 22 00:35:24.861334 containerd[1629]: 2026-01-22 00:35:24.572 [INFO][4298] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 22 00:35:24.861334 containerd[1629]: 2026-01-22 00:35:24.729 [INFO][4298] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 22 00:35:24.861334 containerd[1629]: 2026-01-22 00:35:24.729 [INFO][4298] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-4-171' Jan 22 00:35:24.861334 containerd[1629]: 2026-01-22 00:35:24.767 [INFO][4298] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.711fe13316e115d4a3ac2c44bd09eb1d9cda753fb2ac82bdb74927d1c7e3e739" host="172-232-4-171" Jan 22 00:35:24.861334 containerd[1629]: 2026-01-22 00:35:24.784 [INFO][4298] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-4-171" Jan 22 00:35:24.861334 containerd[1629]: 2026-01-22 00:35:24.798 [INFO][4298] ipam/ipam.go 511: Trying affinity for 192.168.71.64/26 host="172-232-4-171" Jan 22 00:35:24.861334 containerd[1629]: 2026-01-22 00:35:24.800 [INFO][4298] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.64/26 host="172-232-4-171" Jan 22 00:35:24.861334 containerd[1629]: 2026-01-22 00:35:24.804 [INFO][4298] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.64/26 host="172-232-4-171" Jan 22 00:35:24.861334 containerd[1629]: 2026-01-22 00:35:24.804 [INFO][4298] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.71.64/26 handle="k8s-pod-network.711fe13316e115d4a3ac2c44bd09eb1d9cda753fb2ac82bdb74927d1c7e3e739" host="172-232-4-171" Jan 22 00:35:24.861334 containerd[1629]: 2026-01-22 00:35:24.807 [INFO][4298] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.711fe13316e115d4a3ac2c44bd09eb1d9cda753fb2ac82bdb74927d1c7e3e739 Jan 22 00:35:24.861334 containerd[1629]: 2026-01-22 00:35:24.813 [INFO][4298] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.71.64/26 handle="k8s-pod-network.711fe13316e115d4a3ac2c44bd09eb1d9cda753fb2ac82bdb74927d1c7e3e739" host="172-232-4-171" Jan 22 00:35:24.861334 containerd[1629]: 2026-01-22 00:35:24.820 [INFO][4298] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.71.69/26] block=192.168.71.64/26 handle="k8s-pod-network.711fe13316e115d4a3ac2c44bd09eb1d9cda753fb2ac82bdb74927d1c7e3e739" host="172-232-4-171" Jan 22 00:35:24.861334 containerd[1629]: 2026-01-22 00:35:24.821 [INFO][4298] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.69/26] handle="k8s-pod-network.711fe13316e115d4a3ac2c44bd09eb1d9cda753fb2ac82bdb74927d1c7e3e739" host="172-232-4-171" Jan 22 00:35:24.861334 containerd[1629]: 2026-01-22 00:35:24.821 [INFO][4298] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 22 00:35:24.861334 containerd[1629]: 2026-01-22 00:35:24.822 [INFO][4298] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.71.69/26] IPv6=[] ContainerID="711fe13316e115d4a3ac2c44bd09eb1d9cda753fb2ac82bdb74927d1c7e3e739" HandleID="k8s-pod-network.711fe13316e115d4a3ac2c44bd09eb1d9cda753fb2ac82bdb74927d1c7e3e739" Workload="172--232--4--171-k8s-coredns--66bc5c9577--b6ffb-eth0" Jan 22 00:35:24.861824 containerd[1629]: 2026-01-22 00:35:24.827 [INFO][4272] cni-plugin/k8s.go 418: Populated endpoint ContainerID="711fe13316e115d4a3ac2c44bd09eb1d9cda753fb2ac82bdb74927d1c7e3e739" Namespace="kube-system" Pod="coredns-66bc5c9577-b6ffb" WorkloadEndpoint="172--232--4--171-k8s-coredns--66bc5c9577--b6ffb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--4--171-k8s-coredns--66bc5c9577--b6ffb-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"38af0d26-1bef-4093-8245-e7a246914084", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 34, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-4-171", ContainerID:"", Pod:"coredns-66bc5c9577-b6ffb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9c08672e1b3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:35:24.861824 containerd[1629]: 2026-01-22 00:35:24.827 [INFO][4272] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.69/32] ContainerID="711fe13316e115d4a3ac2c44bd09eb1d9cda753fb2ac82bdb74927d1c7e3e739" Namespace="kube-system" Pod="coredns-66bc5c9577-b6ffb" WorkloadEndpoint="172--232--4--171-k8s-coredns--66bc5c9577--b6ffb-eth0" Jan 22 00:35:24.861824 containerd[1629]: 2026-01-22 00:35:24.827 [INFO][4272] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9c08672e1b3 ContainerID="711fe13316e115d4a3ac2c44bd09eb1d9cda753fb2ac82bdb74927d1c7e3e739" Namespace="kube-system" Pod="coredns-66bc5c9577-b6ffb" WorkloadEndpoint="172--232--4--171-k8s-coredns--66bc5c9577--b6ffb-eth0" Jan 22 00:35:24.861824 
containerd[1629]: 2026-01-22 00:35:24.834 [INFO][4272] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="711fe13316e115d4a3ac2c44bd09eb1d9cda753fb2ac82bdb74927d1c7e3e739" Namespace="kube-system" Pod="coredns-66bc5c9577-b6ffb" WorkloadEndpoint="172--232--4--171-k8s-coredns--66bc5c9577--b6ffb-eth0" Jan 22 00:35:24.861824 containerd[1629]: 2026-01-22 00:35:24.834 [INFO][4272] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="711fe13316e115d4a3ac2c44bd09eb1d9cda753fb2ac82bdb74927d1c7e3e739" Namespace="kube-system" Pod="coredns-66bc5c9577-b6ffb" WorkloadEndpoint="172--232--4--171-k8s-coredns--66bc5c9577--b6ffb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--4--171-k8s-coredns--66bc5c9577--b6ffb-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"38af0d26-1bef-4093-8245-e7a246914084", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 34, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-4-171", ContainerID:"711fe13316e115d4a3ac2c44bd09eb1d9cda753fb2ac82bdb74927d1c7e3e739", Pod:"coredns-66bc5c9577-b6ffb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9c08672e1b3", MAC:"5e:3f:8a:29:a4:15", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:35:24.861824 containerd[1629]: 2026-01-22 00:35:24.849 [INFO][4272] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="711fe13316e115d4a3ac2c44bd09eb1d9cda753fb2ac82bdb74927d1c7e3e739" Namespace="kube-system" Pod="coredns-66bc5c9577-b6ffb" WorkloadEndpoint="172--232--4--171-k8s-coredns--66bc5c9577--b6ffb-eth0" Jan 22 00:35:24.874133 containerd[1629]: time="2026-01-22T00:35:24.874010109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-v7rsm,Uid:8d4bbd0f-fe7b-41ce-884e-a153734deda0,Namespace:kube-system,Attempt:0,} returns sandbox id \"a880868f38b0ceeeb0add7633392c9e90cf5f1137c2c5b2744bff4964d7c8308\"" Jan 22 00:35:24.875960 
kubelet[2805]: E0122 00:35:24.875939 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:35:24.884168 systemd-networkd[1515]: caliaa15db37136: Gained IPv6LL Jan 22 00:35:24.888609 containerd[1629]: time="2026-01-22T00:35:24.888091475Z" level=info msg="CreateContainer within sandbox \"a880868f38b0ceeeb0add7633392c9e90cf5f1137c2c5b2744bff4964d7c8308\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 22 00:35:24.899700 systemd[1]: Started cri-containerd-5a26531a8b30dffcedcf203aace05a292dc64684f836f5532e2dc1b47633ba70.scope - libcontainer container 5a26531a8b30dffcedcf203aace05a292dc64684f836f5532e2dc1b47633ba70. Jan 22 00:35:24.907838 containerd[1629]: time="2026-01-22T00:35:24.907379871Z" level=info msg="Container 3744971429bcc4d3187a9c32ee0f596986cbbe67dfda3abbdfe856930f603ff5: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:35:24.916659 containerd[1629]: time="2026-01-22T00:35:24.916637807Z" level=info msg="CreateContainer within sandbox \"a880868f38b0ceeeb0add7633392c9e90cf5f1137c2c5b2744bff4964d7c8308\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3744971429bcc4d3187a9c32ee0f596986cbbe67dfda3abbdfe856930f603ff5\"" Jan 22 00:35:24.917920 containerd[1629]: time="2026-01-22T00:35:24.917848838Z" level=info msg="StartContainer for \"3744971429bcc4d3187a9c32ee0f596986cbbe67dfda3abbdfe856930f603ff5\"" Jan 22 00:35:24.923001 containerd[1629]: time="2026-01-22T00:35:24.922805788Z" level=info msg="connecting to shim 3744971429bcc4d3187a9c32ee0f596986cbbe67dfda3abbdfe856930f603ff5" address="unix:///run/containerd/s/1830ce13d0c77f25d6bb9345512c7df3eff3082d789f585460a6266533126661" protocol=ttrpc version=3 Jan 22 00:35:24.937000 audit: BPF prog-id=202 op=LOAD Jan 22 00:35:24.939000 audit: BPF prog-id=203 op=LOAD Jan 22 00:35:24.939000 audit[4392]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4378 pid=4392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:24.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561323635333161386233306466666365646366323033616163653035 Jan 22 00:35:24.940000 audit: BPF prog-id=203 op=UNLOAD Jan 22 00:35:24.940000 audit[4392]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4378 pid=4392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:24.940000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561323635333161386233306466666365646366323033616163653035 Jan 22 00:35:24.942000 audit: BPF prog-id=204 op=LOAD Jan 22 00:35:24.942000 audit[4392]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4378 pid=4392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:24.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561323635333161386233306466666365646366323033616163653035 Jan 22 00:35:24.942000 audit: BPF prog-id=205 op=LOAD Jan 22 00:35:24.942000 audit[4392]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4378 pid=4392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:24.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561323635333161386233306466666365646366323033616163653035 Jan 22 00:35:24.944000 audit: BPF prog-id=205 op=UNLOAD Jan 22 00:35:24.944000 audit[4392]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4378 pid=4392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:24.944000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561323635333161386233306466666365646366323033616163653035 Jan 22 00:35:24.944000 audit: BPF prog-id=204 op=UNLOAD Jan 22 00:35:24.944000 audit[4392]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4378 pid=4392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:24.944000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561323635333161386233306466666365646366323033616163653035 Jan 22 00:35:24.946000 audit: BPF prog-id=206 op=LOAD Jan 22 00:35:24.946000 audit[4392]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4378 pid=4392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:24.946000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561323635333161386233306466666365646366323033616163653035 Jan 22 00:35:24.960975 systemd[1]: Started cri-containerd-3744971429bcc4d3187a9c32ee0f596986cbbe67dfda3abbdfe856930f603ff5.scope - libcontainer container 3744971429bcc4d3187a9c32ee0f596986cbbe67dfda3abbdfe856930f603ff5. 
Jan 22 00:35:24.978906 containerd[1629]: time="2026-01-22T00:35:24.978852576Z" level=info msg="connecting to shim 711fe13316e115d4a3ac2c44bd09eb1d9cda753fb2ac82bdb74927d1c7e3e739" address="unix:///run/containerd/s/f2f48712019008cb08adc80a73fca94a98b7bc17e7649d2a59f99e4c35286088" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:35:24.989000 audit: BPF prog-id=207 op=LOAD Jan 22 00:35:24.991000 audit: BPF prog-id=208 op=LOAD Jan 22 00:35:24.991000 audit[4426]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4332 pid=4426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:24.991000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337343439373134323962636334643331383761396333326565306635 Jan 22 00:35:24.992000 audit: BPF prog-id=208 op=UNLOAD Jan 22 00:35:24.992000 audit[4426]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4332 pid=4426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:24.992000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337343439373134323962636334643331383761396333326565306635 Jan 22 00:35:24.992000 audit: BPF prog-id=209 op=LOAD Jan 22 00:35:24.992000 audit[4426]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4332 pid=4426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:24.992000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337343439373134323962636334643331383761396333326565306635 Jan 22 00:35:24.993000 audit: BPF prog-id=210 op=LOAD Jan 22 00:35:24.993000 audit[4426]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4332 pid=4426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:24.993000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337343439373134323962636334643331383761396333326565306635 Jan 22 00:35:24.995000 audit: BPF prog-id=210 op=UNLOAD Jan 22 00:35:24.995000 audit[4426]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4332 pid=4426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:24.995000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337343439373134323962636334643331383761396333326565306635 Jan 22 00:35:24.995000 audit: BPF prog-id=209 op=UNLOAD Jan 22 00:35:24.995000 audit[4426]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4332 pid=4426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:24.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337343439373134323962636334643331383761396333326565306635 Jan 22 00:35:24.995000 audit: BPF prog-id=211 op=LOAD Jan 22 00:35:24.995000 audit[4426]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4332 pid=4426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:24.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337343439373134323962636334643331383761396333326565306635 Jan 22 00:35:25.043652 systemd[1]: Started cri-containerd-711fe13316e115d4a3ac2c44bd09eb1d9cda753fb2ac82bdb74927d1c7e3e739.scope - libcontainer container 711fe13316e115d4a3ac2c44bd09eb1d9cda753fb2ac82bdb74927d1c7e3e739. 
Jan 22 00:35:25.067000 audit: BPF prog-id=212 op=LOAD Jan 22 00:35:25.072000 audit: BPF prog-id=213 op=LOAD Jan 22 00:35:25.072000 audit[4465]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4445 pid=4465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:25.072000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731316665313333313665313135643461336163326334346264303965 Jan 22 00:35:25.072000 audit: BPF prog-id=213 op=UNLOAD Jan 22 00:35:25.072000 audit[4465]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4445 pid=4465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:25.072000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731316665313333313665313135643461336163326334346264303965 Jan 22 00:35:25.073389 containerd[1629]: time="2026-01-22T00:35:25.071862106Z" level=info msg="StartContainer for \"3744971429bcc4d3187a9c32ee0f596986cbbe67dfda3abbdfe856930f603ff5\" returns successfully" Jan 22 00:35:25.073000 audit: BPF prog-id=214 op=LOAD Jan 22 00:35:25.073000 audit[4465]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4445 pid=4465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:25.073000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731316665313333313665313135643461336163326334346264303965 Jan 22 00:35:25.073000 audit: BPF prog-id=215 op=LOAD Jan 22 00:35:25.073000 audit[4465]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4445 pid=4465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:25.073000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731316665313333313665313135643461336163326334346264303965 Jan 22 00:35:25.073000 audit: BPF prog-id=215 op=UNLOAD Jan 22 00:35:25.073000 audit[4465]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4445 pid=4465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:25.073000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731316665313333313665313135643461336163326334346264303965 Jan 22 00:35:25.073000 audit: BPF prog-id=214 op=UNLOAD Jan 22 00:35:25.073000 audit[4465]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4445 pid=4465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:25.073000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731316665313333313665313135643461336163326334346264303965 Jan 22 00:35:25.073000 audit: BPF prog-id=216 op=LOAD Jan 22 00:35:25.073000 audit[4465]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4445 pid=4465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:25.073000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731316665313333313665313135643461336163326334346264303965 Jan 22 00:35:25.107246 containerd[1629]: time="2026-01-22T00:35:25.107176070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-jv75z,Uid:7058f0b0-e750-4a4f-832e-cf58713e25a5,Namespace:calico-system,Attempt:0,} returns sandbox id \"5a26531a8b30dffcedcf203aace05a292dc64684f836f5532e2dc1b47633ba70\"" Jan 22 00:35:25.110039 containerd[1629]: time="2026-01-22T00:35:25.110002227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 22 00:35:25.185623 containerd[1629]: time="2026-01-22T00:35:25.185564041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-b6ffb,Uid:38af0d26-1bef-4093-8245-e7a246914084,Namespace:kube-system,Attempt:0,} returns sandbox id \"711fe13316e115d4a3ac2c44bd09eb1d9cda753fb2ac82bdb74927d1c7e3e739\"" Jan 22 00:35:25.186261 kubelet[2805]: E0122 00:35:25.186224 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:35:25.190340 containerd[1629]: time="2026-01-22T00:35:25.190310093Z" level=info msg="CreateContainer within sandbox \"711fe13316e115d4a3ac2c44bd09eb1d9cda753fb2ac82bdb74927d1c7e3e739\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 22 00:35:25.197999 containerd[1629]: time="2026-01-22T00:35:25.197955767Z" level=info msg="Container 8ae1a69b18e85f8096f2284c248ac367a7f2aa9c06f1b249059a88aa69174d28: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:35:25.218423 containerd[1629]: time="2026-01-22T00:35:25.218191321Z" level=info msg="CreateContainer within sandbox \"711fe13316e115d4a3ac2c44bd09eb1d9cda753fb2ac82bdb74927d1c7e3e739\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8ae1a69b18e85f8096f2284c248ac367a7f2aa9c06f1b249059a88aa69174d28\"" Jan 22 00:35:25.219801 containerd[1629]: 
time="2026-01-22T00:35:25.219720254Z" level=info msg="StartContainer for \"8ae1a69b18e85f8096f2284c248ac367a7f2aa9c06f1b249059a88aa69174d28\"" Jan 22 00:35:25.220660 containerd[1629]: time="2026-01-22T00:35:25.220625246Z" level=info msg="connecting to shim 8ae1a69b18e85f8096f2284c248ac367a7f2aa9c06f1b249059a88aa69174d28" address="unix:///run/containerd/s/f2f48712019008cb08adc80a73fca94a98b7bc17e7649d2a59f99e4c35286088" protocol=ttrpc version=3 Jan 22 00:35:25.251853 systemd[1]: Started cri-containerd-8ae1a69b18e85f8096f2284c248ac367a7f2aa9c06f1b249059a88aa69174d28.scope - libcontainer container 8ae1a69b18e85f8096f2284c248ac367a7f2aa9c06f1b249059a88aa69174d28. Jan 22 00:35:25.268392 containerd[1629]: time="2026-01-22T00:35:25.268351217Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:35:25.270145 containerd[1629]: time="2026-01-22T00:35:25.270051853Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 22 00:35:25.270243 containerd[1629]: time="2026-01-22T00:35:25.270226407Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 22 00:35:25.271091 kubelet[2805]: E0122 00:35:25.271040 2805 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 22 00:35:25.272638 kubelet[2805]: E0122 00:35:25.271393 2805 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 22 00:35:25.272638 kubelet[2805]: E0122 00:35:25.271481 2805 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-jv75z_calico-system(7058f0b0-e750-4a4f-832e-cf58713e25a5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 22 00:35:25.272638 kubelet[2805]: E0122 00:35:25.272325 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-jv75z" podUID="7058f0b0-e750-4a4f-832e-cf58713e25a5" Jan 22 00:35:25.275000 audit: BPF prog-id=217 op=LOAD Jan 22 00:35:25.276000 audit: BPF prog-id=218 op=LOAD Jan 22 00:35:25.276000 audit[4524]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4445 pid=4524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:25.276000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861653161363962313865383566383039366632323834633234386163 Jan 22 00:35:25.276000 audit: BPF prog-id=218 op=UNLOAD Jan 22 00:35:25.276000 audit[4524]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4445 pid=4524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:25.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861653161363962313865383566383039366632323834633234386163 Jan 22 00:35:25.276000 audit: BPF prog-id=219 op=LOAD Jan 22 00:35:25.276000 audit[4524]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4445 pid=4524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:25.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861653161363962313865383566383039366632323834633234386163 Jan 22 00:35:25.277000 audit: BPF prog-id=220 op=LOAD Jan 22 00:35:25.277000 audit[4524]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4445 pid=4524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:25.277000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861653161363962313865383566383039366632323834633234386163 Jan 22 00:35:25.277000 audit: BPF prog-id=220 op=UNLOAD Jan 22 00:35:25.277000 audit[4524]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4445 pid=4524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:25.277000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861653161363962313865383566383039366632323834633234386163 Jan 22 00:35:25.277000 audit: BPF prog-id=219 op=UNLOAD Jan 22 00:35:25.277000 audit[4524]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4445 pid=4524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:25.277000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861653161363962313865383566383039366632323834633234386163 Jan 22 00:35:25.277000 audit: BPF prog-id=221 op=LOAD Jan 22 00:35:25.277000 audit[4524]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4445 pid=4524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:25.277000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861653161363962313865383566383039366632323834633234386163 Jan 22 00:35:25.311454 containerd[1629]: time="2026-01-22T00:35:25.311053434Z" level=info msg="StartContainer for \"8ae1a69b18e85f8096f2284c248ac367a7f2aa9c06f1b249059a88aa69174d28\" returns successfully" Jan 22 00:35:25.424445 containerd[1629]: time="2026-01-22T00:35:25.423917233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bbb7b878c-n8l2v,Uid:53fd5176-64d2-4a70-9167-8081a837fe6e,Namespace:calico-apiserver,Attempt:0,}" Jan 22 00:35:25.425492 containerd[1629]: time="2026-01-22T00:35:25.425462786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wm244,Uid:ae45d206-785e-4662-9efc-4b0987941483,Namespace:calico-system,Attempt:0,}" Jan 22 00:35:25.568887 systemd-networkd[1515]: cali118dd7f1911: Link UP Jan 22 00:35:25.570494 systemd-networkd[1515]: cali118dd7f1911: Gained carrier Jan 22 00:35:25.584573 containerd[1629]: 2026-01-22 00:35:25.478 [INFO][4559] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 22 00:35:25.584573 containerd[1629]: 2026-01-22 00:35:25.490 [INFO][4559] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--4--171-k8s-calico--apiserver--bbb7b878c--n8l2v-eth0 calico-apiserver-bbb7b878c- calico-apiserver 53fd5176-64d2-4a70-9167-8081a837fe6e 824 0 2026-01-22 00:35:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:bbb7b878c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-232-4-171 calico-apiserver-bbb7b878c-n8l2v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali118dd7f1911 [] [] }} ContainerID="ea7498de6780bd93c358ee86b9142ffdec0e56844c2fba098b2ff34b9a2aaf3a" Namespace="calico-apiserver" Pod="calico-apiserver-bbb7b878c-n8l2v" WorkloadEndpoint="172--232--4--171-k8s-calico--apiserver--bbb7b878c--n8l2v-" Jan 22 00:35:25.584573 containerd[1629]: 2026-01-22 00:35:25.490 [INFO][4559] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ea7498de6780bd93c358ee86b9142ffdec0e56844c2fba098b2ff34b9a2aaf3a" Namespace="calico-apiserver" Pod="calico-apiserver-bbb7b878c-n8l2v" WorkloadEndpoint="172--232--4--171-k8s-calico--apiserver--bbb7b878c--n8l2v-eth0" Jan 22 00:35:25.584573 containerd[1629]: 2026-01-22 00:35:25.518 [INFO][4588] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ea7498de6780bd93c358ee86b9142ffdec0e56844c2fba098b2ff34b9a2aaf3a" 
HandleID="k8s-pod-network.ea7498de6780bd93c358ee86b9142ffdec0e56844c2fba098b2ff34b9a2aaf3a" Workload="172--232--4--171-k8s-calico--apiserver--bbb7b878c--n8l2v-eth0" Jan 22 00:35:25.584573 containerd[1629]: 2026-01-22 00:35:25.518 [INFO][4588] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ea7498de6780bd93c358ee86b9142ffdec0e56844c2fba098b2ff34b9a2aaf3a" HandleID="k8s-pod-network.ea7498de6780bd93c358ee86b9142ffdec0e56844c2fba098b2ff34b9a2aaf3a" Workload="172--232--4--171-k8s-calico--apiserver--bbb7b878c--n8l2v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5030), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-232-4-171", "pod":"calico-apiserver-bbb7b878c-n8l2v", "timestamp":"2026-01-22 00:35:25.518708851 +0000 UTC"}, Hostname:"172-232-4-171", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 22 00:35:25.584573 containerd[1629]: 2026-01-22 00:35:25.518 [INFO][4588] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 22 00:35:25.584573 containerd[1629]: 2026-01-22 00:35:25.518 [INFO][4588] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 22 00:35:25.584573 containerd[1629]: 2026-01-22 00:35:25.518 [INFO][4588] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-4-171' Jan 22 00:35:25.584573 containerd[1629]: 2026-01-22 00:35:25.535 [INFO][4588] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ea7498de6780bd93c358ee86b9142ffdec0e56844c2fba098b2ff34b9a2aaf3a" host="172-232-4-171" Jan 22 00:35:25.584573 containerd[1629]: 2026-01-22 00:35:25.543 [INFO][4588] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-4-171" Jan 22 00:35:25.584573 containerd[1629]: 2026-01-22 00:35:25.547 [INFO][4588] ipam/ipam.go 511: Trying affinity for 192.168.71.64/26 host="172-232-4-171" Jan 22 00:35:25.584573 containerd[1629]: 2026-01-22 00:35:25.548 [INFO][4588] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.64/26 host="172-232-4-171" Jan 22 00:35:25.584573 containerd[1629]: 2026-01-22 00:35:25.550 [INFO][4588] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.64/26 host="172-232-4-171" Jan 22 00:35:25.584573 containerd[1629]: 2026-01-22 00:35:25.550 [INFO][4588] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.71.64/26 handle="k8s-pod-network.ea7498de6780bd93c358ee86b9142ffdec0e56844c2fba098b2ff34b9a2aaf3a" host="172-232-4-171" Jan 22 00:35:25.584573 containerd[1629]: 2026-01-22 00:35:25.552 [INFO][4588] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ea7498de6780bd93c358ee86b9142ffdec0e56844c2fba098b2ff34b9a2aaf3a Jan 22 00:35:25.584573 containerd[1629]: 2026-01-22 00:35:25.555 [INFO][4588] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.71.64/26 handle="k8s-pod-network.ea7498de6780bd93c358ee86b9142ffdec0e56844c2fba098b2ff34b9a2aaf3a" host="172-232-4-171" Jan 22 00:35:25.584573 containerd[1629]: 2026-01-22 00:35:25.559 [INFO][4588] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.71.70/26] block=192.168.71.64/26 handle="k8s-pod-network.ea7498de6780bd93c358ee86b9142ffdec0e56844c2fba098b2ff34b9a2aaf3a" host="172-232-4-171" Jan 22 00:35:25.584573 containerd[1629]: 2026-01-22 00:35:25.559 [INFO][4588] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.70/26] 
handle="k8s-pod-network.ea7498de6780bd93c358ee86b9142ffdec0e56844c2fba098b2ff34b9a2aaf3a" host="172-232-4-171" Jan 22 00:35:25.584573 containerd[1629]: 2026-01-22 00:35:25.559 [INFO][4588] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 22 00:35:25.584573 containerd[1629]: 2026-01-22 00:35:25.559 [INFO][4588] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.71.70/26] IPv6=[] ContainerID="ea7498de6780bd93c358ee86b9142ffdec0e56844c2fba098b2ff34b9a2aaf3a" HandleID="k8s-pod-network.ea7498de6780bd93c358ee86b9142ffdec0e56844c2fba098b2ff34b9a2aaf3a" Workload="172--232--4--171-k8s-calico--apiserver--bbb7b878c--n8l2v-eth0" Jan 22 00:35:25.585377 containerd[1629]: 2026-01-22 00:35:25.562 [INFO][4559] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ea7498de6780bd93c358ee86b9142ffdec0e56844c2fba098b2ff34b9a2aaf3a" Namespace="calico-apiserver" Pod="calico-apiserver-bbb7b878c-n8l2v" WorkloadEndpoint="172--232--4--171-k8s-calico--apiserver--bbb7b878c--n8l2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--4--171-k8s-calico--apiserver--bbb7b878c--n8l2v-eth0", GenerateName:"calico-apiserver-bbb7b878c-", Namespace:"calico-apiserver", SelfLink:"", UID:"53fd5176-64d2-4a70-9167-8081a837fe6e", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 35, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bbb7b878c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-4-171", ContainerID:"", Pod:"calico-apiserver-bbb7b878c-n8l2v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali118dd7f1911", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:35:25.585377 containerd[1629]: 2026-01-22 00:35:25.563 [INFO][4559] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.70/32] ContainerID="ea7498de6780bd93c358ee86b9142ffdec0e56844c2fba098b2ff34b9a2aaf3a" Namespace="calico-apiserver" Pod="calico-apiserver-bbb7b878c-n8l2v" WorkloadEndpoint="172--232--4--171-k8s-calico--apiserver--bbb7b878c--n8l2v-eth0" Jan 22 00:35:25.585377 containerd[1629]: 2026-01-22 00:35:25.563 [INFO][4559] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali118dd7f1911 ContainerID="ea7498de6780bd93c358ee86b9142ffdec0e56844c2fba098b2ff34b9a2aaf3a" Namespace="calico-apiserver" Pod="calico-apiserver-bbb7b878c-n8l2v" WorkloadEndpoint="172--232--4--171-k8s-calico--apiserver--bbb7b878c--n8l2v-eth0" Jan 22 00:35:25.585377 containerd[1629]: 2026-01-22 00:35:25.572 [INFO][4559] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ea7498de6780bd93c358ee86b9142ffdec0e56844c2fba098b2ff34b9a2aaf3a" Namespace="calico-apiserver" Pod="calico-apiserver-bbb7b878c-n8l2v" 
WorkloadEndpoint="172--232--4--171-k8s-calico--apiserver--bbb7b878c--n8l2v-eth0" Jan 22 00:35:25.585377 containerd[1629]: 2026-01-22 00:35:25.575 [INFO][4559] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ea7498de6780bd93c358ee86b9142ffdec0e56844c2fba098b2ff34b9a2aaf3a" Namespace="calico-apiserver" Pod="calico-apiserver-bbb7b878c-n8l2v" WorkloadEndpoint="172--232--4--171-k8s-calico--apiserver--bbb7b878c--n8l2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--4--171-k8s-calico--apiserver--bbb7b878c--n8l2v-eth0", GenerateName:"calico-apiserver-bbb7b878c-", Namespace:"calico-apiserver", SelfLink:"", UID:"53fd5176-64d2-4a70-9167-8081a837fe6e", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 35, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bbb7b878c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-4-171", ContainerID:"ea7498de6780bd93c358ee86b9142ffdec0e56844c2fba098b2ff34b9a2aaf3a", Pod:"calico-apiserver-bbb7b878c-n8l2v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali118dd7f1911", MAC:"36:62:f5:ae:1c:4c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:35:25.585377 containerd[1629]: 2026-01-22 00:35:25.581 [INFO][4559] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ea7498de6780bd93c358ee86b9142ffdec0e56844c2fba098b2ff34b9a2aaf3a" Namespace="calico-apiserver" Pod="calico-apiserver-bbb7b878c-n8l2v" WorkloadEndpoint="172--232--4--171-k8s-calico--apiserver--bbb7b878c--n8l2v-eth0" Jan 22 00:35:25.609015 containerd[1629]: time="2026-01-22T00:35:25.608986036Z" level=info msg="connecting to shim ea7498de6780bd93c358ee86b9142ffdec0e56844c2fba098b2ff34b9a2aaf3a" address="unix:///run/containerd/s/9e4e7b1a6996f94c815f4022256f91c506e9353bf959f826b4b2abf20166d95c" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:35:25.614937 kubelet[2805]: E0122 00:35:25.614906 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-jv75z" podUID="7058f0b0-e750-4a4f-832e-cf58713e25a5" Jan 22 00:35:25.630293 kubelet[2805]: E0122 00:35:25.629476 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" 
Jan 22 00:35:25.640579 kubelet[2805]: E0122 00:35:25.640551 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bbb7b878c-c279j" podUID="ac5cf267-1601-4ae5-91e1-dc1496ea695f" Jan 22 00:35:25.644222 kubelet[2805]: E0122 00:35:25.640723 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:35:25.659984 systemd[1]: Started cri-containerd-ea7498de6780bd93c358ee86b9142ffdec0e56844c2fba098b2ff34b9a2aaf3a.scope - libcontainer container ea7498de6780bd93c358ee86b9142ffdec0e56844c2fba098b2ff34b9a2aaf3a. Jan 22 00:35:25.696000 audit[4644]: NETFILTER_CFG table=filter:121 family=2 entries=22 op=nft_register_rule pid=4644 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:35:25.696000 audit[4644]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff242f1240 a2=0 a3=7fff242f122c items=0 ppid=2914 pid=4644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:25.696000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:35:25.700434 kubelet[2805]: I0122 00:35:25.700377 2805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-v7rsm" podStartSLOduration=34.70035973 podStartE2EDuration="34.70035973s" podCreationTimestamp="2026-01-22 00:34:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 00:35:25.665233121 +0000 UTC m=+40.387131307" watchObservedRunningTime="2026-01-22 00:35:25.70035973 +0000 UTC m=+40.422257916" Jan 22 00:35:25.701000 audit[4644]: NETFILTER_CFG table=nat:122 family=2 entries=12 op=nft_register_rule pid=4644 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:35:25.701000 audit[4644]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff242f1240 a2=0 a3=0 items=0 ppid=2914 pid=4644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:25.701000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:35:25.705000 audit: BPF prog-id=222 op=LOAD Jan 22 00:35:25.706000 audit: BPF prog-id=223 op=LOAD Jan 22 00:35:25.706000 audit[4621]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4610 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:25.706000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561373439386465363738306264393363333538656538366239313432 Jan 22 00:35:25.706000 audit: BPF prog-id=223 op=UNLOAD Jan 22 00:35:25.706000 audit[4621]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4610 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:25.706000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561373439386465363738306264393363333538656538366239313432 Jan 22 00:35:25.706000 audit: BPF prog-id=224 op=LOAD Jan 22 00:35:25.706000 audit[4621]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4610 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:25.706000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561373439386465363738306264393363333538656538366239313432 Jan 22 00:35:25.706000 audit: BPF prog-id=225 op=LOAD Jan 22 00:35:25.706000 audit[4621]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4610 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:25.706000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561373439386465363738306264393363333538656538366239313432 Jan 22 00:35:25.707000 audit: BPF prog-id=225 op=UNLOAD Jan 22 00:35:25.707000 audit[4621]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4610 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:25.707000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561373439386465363738306264393363333538656538366239313432 Jan 22 00:35:25.707000 audit: BPF prog-id=224 op=UNLOAD Jan 22 00:35:25.707000 audit[4621]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4610 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:25.707000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561373439386465363738306264393363333538656538366239313432 Jan 22 00:35:25.707000 audit: BPF prog-id=226 op=LOAD Jan 22 00:35:25.707000 audit[4621]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4610 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:25.707000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561373439386465363738306264393363333538656538366239313432 Jan 22 00:35:25.715804 systemd-networkd[1515]: cali5355a12d2d8: Gained IPv6LL Jan 22 00:35:25.731361 kubelet[2805]: I0122 00:35:25.731318 2805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-b6ffb" podStartSLOduration=34.731300943 podStartE2EDuration="34.731300943s" podCreationTimestamp="2026-01-22 00:34:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 00:35:25.701455228 +0000 UTC m=+40.423353434" watchObservedRunningTime="2026-01-22 00:35:25.731300943 +0000 UTC m=+40.453199129" Jan 22 00:35:25.767144 systemd-networkd[1515]: cali1d4a663e250: Link UP Jan 22 00:35:25.769013 systemd-networkd[1515]: cali1d4a663e250: Gained carrier Jan 22 00:35:25.792467 containerd[1629]: time="2026-01-22T00:35:25.792434660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bbb7b878c-n8l2v,Uid:53fd5176-64d2-4a70-9167-8081a837fe6e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ea7498de6780bd93c358ee86b9142ffdec0e56844c2fba098b2ff34b9a2aaf3a\"" Jan 22 00:35:25.797583 containerd[1629]: 2026-01-22 00:35:25.468 [INFO][4564] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 22 00:35:25.797583 containerd[1629]: 2026-01-22 00:35:25.484 [INFO][4564] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--4--171-k8s-csi--node--driver--wm244-eth0 csi-node-driver- calico-system ae45d206-785e-4662-9efc-4b0987941483 713 0 2026-01-22 00:35:05 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-232-4-171 csi-node-driver-wm244 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1d4a663e250 [] [] }} ContainerID="4ad95026d9c2f4cc88392544036af84d97190210f3b9ea0bfc428be7d0fac591" Namespace="calico-system" Pod="csi-node-driver-wm244" WorkloadEndpoint="172--232--4--171-k8s-csi--node--driver--wm244-" Jan 22 00:35:25.797583 containerd[1629]: 2026-01-22 00:35:25.484 [INFO][4564] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4ad95026d9c2f4cc88392544036af84d97190210f3b9ea0bfc428be7d0fac591" Namespace="calico-system" Pod="csi-node-driver-wm244" WorkloadEndpoint="172--232--4--171-k8s-csi--node--driver--wm244-eth0" Jan 22 
00:35:25.797583 containerd[1629]: 2026-01-22 00:35:25.540 [INFO][4583] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4ad95026d9c2f4cc88392544036af84d97190210f3b9ea0bfc428be7d0fac591" HandleID="k8s-pod-network.4ad95026d9c2f4cc88392544036af84d97190210f3b9ea0bfc428be7d0fac591" Workload="172--232--4--171-k8s-csi--node--driver--wm244-eth0" Jan 22 00:35:25.797583 containerd[1629]: 2026-01-22 00:35:25.540 [INFO][4583] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4ad95026d9c2f4cc88392544036af84d97190210f3b9ea0bfc428be7d0fac591" HandleID="k8s-pod-network.4ad95026d9c2f4cc88392544036af84d97190210f3b9ea0bfc428be7d0fac591" Workload="172--232--4--171-k8s-csi--node--driver--wm244-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003157b0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-4-171", "pod":"csi-node-driver-wm244", "timestamp":"2026-01-22 00:35:25.540549514 +0000 UTC"}, Hostname:"172-232-4-171", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 22 00:35:25.797583 containerd[1629]: 2026-01-22 00:35:25.541 [INFO][4583] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 22 00:35:25.797583 containerd[1629]: 2026-01-22 00:35:25.560 [INFO][4583] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 22 00:35:25.797583 containerd[1629]: 2026-01-22 00:35:25.560 [INFO][4583] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-4-171' Jan 22 00:35:25.797583 containerd[1629]: 2026-01-22 00:35:25.642 [INFO][4583] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4ad95026d9c2f4cc88392544036af84d97190210f3b9ea0bfc428be7d0fac591" host="172-232-4-171" Jan 22 00:35:25.797583 containerd[1629]: 2026-01-22 00:35:25.665 [INFO][4583] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-4-171" Jan 22 00:35:25.797583 containerd[1629]: 2026-01-22 00:35:25.696 [INFO][4583] ipam/ipam.go 511: Trying affinity for 192.168.71.64/26 host="172-232-4-171" Jan 22 00:35:25.797583 containerd[1629]: 2026-01-22 00:35:25.698 [INFO][4583] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.64/26 host="172-232-4-171" Jan 22 00:35:25.797583 containerd[1629]: 2026-01-22 00:35:25.706 [INFO][4583] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.64/26 host="172-232-4-171" Jan 22 00:35:25.797583 containerd[1629]: 2026-01-22 00:35:25.706 [INFO][4583] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.71.64/26 handle="k8s-pod-network.4ad95026d9c2f4cc88392544036af84d97190210f3b9ea0bfc428be7d0fac591" host="172-232-4-171" Jan 22 00:35:25.797583 containerd[1629]: 2026-01-22 00:35:25.708 [INFO][4583] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4ad95026d9c2f4cc88392544036af84d97190210f3b9ea0bfc428be7d0fac591 Jan 22 00:35:25.797583 containerd[1629]: 2026-01-22 00:35:25.731 [INFO][4583] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.71.64/26 handle="k8s-pod-network.4ad95026d9c2f4cc88392544036af84d97190210f3b9ea0bfc428be7d0fac591" host="172-232-4-171" Jan 22 00:35:25.797583 containerd[1629]: 2026-01-22 00:35:25.755 [INFO][4583] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.71.71/26] block=192.168.71.64/26 handle="k8s-pod-network.4ad95026d9c2f4cc88392544036af84d97190210f3b9ea0bfc428be7d0fac591" host="172-232-4-171" 
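The ipam.go lines above show Calico's normal assignment path: take the host-wide IPAM lock, reuse the node's existing affinity for block 192.168.71.64/26, and claim the next free address (192.168.71.71 for the csi-node-driver pod, following .70 assigned to the apiserver pod earlier). A quick check with Python's ipaddress module shows the addresses handed out in this section all sit inside that affine /26:

import ipaddress

# The block Calico keeps affine to node 172-232-4-171 in the IPAM records above.
block = ipaddress.ip_network("192.168.71.64/26")
print(block.num_addresses, block.network_address, block.broadcast_address)
# 64 192.168.71.64 192.168.71.127

# Addresses claimed in this section: .70 (calico-apiserver pod), .71
# (csi-node-driver), .72 (calico-kube-controllers, a little further down).
for ip in ("192.168.71.70", "192.168.71.71", "192.168.71.72"):
    assert ipaddress.ip_address(ip) in block
print("all claimed addresses are inside", block)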
Jan 22 00:35:25.797583 containerd[1629]: 2026-01-22 00:35:25.755 [INFO][4583] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.71/26] handle="k8s-pod-network.4ad95026d9c2f4cc88392544036af84d97190210f3b9ea0bfc428be7d0fac591" host="172-232-4-171" Jan 22 00:35:25.797583 containerd[1629]: 2026-01-22 00:35:25.755 [INFO][4583] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 22 00:35:25.797583 containerd[1629]: 2026-01-22 00:35:25.755 [INFO][4583] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.71.71/26] IPv6=[] ContainerID="4ad95026d9c2f4cc88392544036af84d97190210f3b9ea0bfc428be7d0fac591" HandleID="k8s-pod-network.4ad95026d9c2f4cc88392544036af84d97190210f3b9ea0bfc428be7d0fac591" Workload="172--232--4--171-k8s-csi--node--driver--wm244-eth0" Jan 22 00:35:25.798307 containerd[1629]: 2026-01-22 00:35:25.761 [INFO][4564] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4ad95026d9c2f4cc88392544036af84d97190210f3b9ea0bfc428be7d0fac591" Namespace="calico-system" Pod="csi-node-driver-wm244" WorkloadEndpoint="172--232--4--171-k8s-csi--node--driver--wm244-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--4--171-k8s-csi--node--driver--wm244-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ae45d206-785e-4662-9efc-4b0987941483", ResourceVersion:"713", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 35, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-4-171", ContainerID:"", Pod:"csi-node-driver-wm244", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.71.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1d4a663e250", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:35:25.798307 containerd[1629]: 2026-01-22 00:35:25.762 [INFO][4564] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.71/32] ContainerID="4ad95026d9c2f4cc88392544036af84d97190210f3b9ea0bfc428be7d0fac591" Namespace="calico-system" Pod="csi-node-driver-wm244" WorkloadEndpoint="172--232--4--171-k8s-csi--node--driver--wm244-eth0" Jan 22 00:35:25.798307 containerd[1629]: 2026-01-22 00:35:25.762 [INFO][4564] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1d4a663e250 ContainerID="4ad95026d9c2f4cc88392544036af84d97190210f3b9ea0bfc428be7d0fac591" Namespace="calico-system" Pod="csi-node-driver-wm244" WorkloadEndpoint="172--232--4--171-k8s-csi--node--driver--wm244-eth0" Jan 22 00:35:25.798307 containerd[1629]: 2026-01-22 00:35:25.776 [INFO][4564] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4ad95026d9c2f4cc88392544036af84d97190210f3b9ea0bfc428be7d0fac591" 
Namespace="calico-system" Pod="csi-node-driver-wm244" WorkloadEndpoint="172--232--4--171-k8s-csi--node--driver--wm244-eth0" Jan 22 00:35:25.798307 containerd[1629]: 2026-01-22 00:35:25.777 [INFO][4564] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4ad95026d9c2f4cc88392544036af84d97190210f3b9ea0bfc428be7d0fac591" Namespace="calico-system" Pod="csi-node-driver-wm244" WorkloadEndpoint="172--232--4--171-k8s-csi--node--driver--wm244-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--4--171-k8s-csi--node--driver--wm244-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ae45d206-785e-4662-9efc-4b0987941483", ResourceVersion:"713", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 35, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-4-171", ContainerID:"4ad95026d9c2f4cc88392544036af84d97190210f3b9ea0bfc428be7d0fac591", Pod:"csi-node-driver-wm244", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.71.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1d4a663e250", MAC:"22:02:95:8c:80:75", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:35:25.798307 containerd[1629]: 2026-01-22 00:35:25.793 [INFO][4564] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4ad95026d9c2f4cc88392544036af84d97190210f3b9ea0bfc428be7d0fac591" Namespace="calico-system" Pod="csi-node-driver-wm244" WorkloadEndpoint="172--232--4--171-k8s-csi--node--driver--wm244-eth0" Jan 22 00:35:25.800599 containerd[1629]: time="2026-01-22T00:35:25.799965254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 22 00:35:25.832448 containerd[1629]: time="2026-01-22T00:35:25.831080292Z" level=info msg="connecting to shim 4ad95026d9c2f4cc88392544036af84d97190210f3b9ea0bfc428be7d0fac591" address="unix:///run/containerd/s/1c99deb4ab9e46bfb329b2f0470edc55ff8f65c20ef8b8963612fc785d2e9c10" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:35:25.870675 systemd[1]: Started cri-containerd-4ad95026d9c2f4cc88392544036af84d97190210f3b9ea0bfc428be7d0fac591.scope - libcontainer container 4ad95026d9c2f4cc88392544036af84d97190210f3b9ea0bfc428be7d0fac591. 
Jan 22 00:35:25.880000 audit: BPF prog-id=227 op=LOAD Jan 22 00:35:25.881000 audit: BPF prog-id=228 op=LOAD Jan 22 00:35:25.881000 audit[4678]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4667 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:25.881000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461643935303236643963326634636338383339323534343033366166 Jan 22 00:35:25.881000 audit: BPF prog-id=228 op=UNLOAD Jan 22 00:35:25.881000 audit[4678]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4667 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:25.881000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461643935303236643963326634636338383339323534343033366166 Jan 22 00:35:25.881000 audit: BPF prog-id=229 op=LOAD Jan 22 00:35:25.881000 audit[4678]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4667 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:25.881000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461643935303236643963326634636338383339323534343033366166 Jan 22 00:35:25.881000 audit: BPF prog-id=230 op=LOAD Jan 22 00:35:25.881000 audit[4678]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4667 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:25.881000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461643935303236643963326634636338383339323534343033366166 Jan 22 00:35:25.881000 audit: BPF prog-id=230 op=UNLOAD Jan 22 00:35:25.881000 audit[4678]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4667 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:25.881000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461643935303236643963326634636338383339323534343033366166 Jan 22 00:35:25.881000 audit: BPF prog-id=229 op=UNLOAD Jan 22 00:35:25.881000 audit[4678]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4667 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:25.881000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461643935303236643963326634636338383339323534343033366166 Jan 22 00:35:25.881000 audit: BPF prog-id=231 op=LOAD Jan 22 00:35:25.881000 audit[4678]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4667 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:25.881000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461643935303236643963326634636338383339323534343033366166 Jan 22 00:35:25.909136 systemd-networkd[1515]: calic99323f2cb7: Gained IPv6LL Jan 22 00:35:25.916448 containerd[1629]: time="2026-01-22T00:35:25.916421772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wm244,Uid:ae45d206-785e-4662-9efc-4b0987941483,Namespace:calico-system,Attempt:0,} returns sandbox id \"4ad95026d9c2f4cc88392544036af84d97190210f3b9ea0bfc428be7d0fac591\"" Jan 22 00:35:25.931378 containerd[1629]: time="2026-01-22T00:35:25.931327308Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:35:25.932206 containerd[1629]: time="2026-01-22T00:35:25.932176625Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 22 00:35:25.932282 containerd[1629]: time="2026-01-22T00:35:25.932229080Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 22 00:35:25.932363 kubelet[2805]: E0122 00:35:25.932336 2805 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:35:25.932404 kubelet[2805]: E0122 00:35:25.932366 2805 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:35:25.932532 kubelet[2805]: E0122 00:35:25.932473 2805 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-bbb7b878c-n8l2v_calico-apiserver(53fd5176-64d2-4a70-9167-8081a837fe6e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" 
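Every pull failure in this section follows the same chain: containerd gets a 404 from ghcr.io ("fetch failed after status: 404 Not Found"), returns a NotFound error over CRI, kubelet logs ErrImagePull, and on subsequent syncs the container sits in ImagePullBackOff. Assuming ghcr.io's usual anonymous token endpoint for public images (an assumption, not something taken from this log), the 404 can be reproduced directly against the registry's OCI distribution API; this is a sketch, not part of any tooling shown here:

import json
import urllib.error
import urllib.request

# Sketch only: ask ghcr.io's OCI distribution API whether the tag exists,
# assuming the registry's usual anonymous token flow for public images.
REPO = "flatcar/calico/apiserver"   # repository from the failing pull above
TAG = "v3.30.4"                     # tag containerd cannot resolve

token_url = f"https://ghcr.io/token?service=ghcr.io&scope=repository:{REPO}:pull"
token = json.load(urllib.request.urlopen(token_url))["token"]

req = urllib.request.Request(
    f"https://ghcr.io/v2/{REPO}/manifests/{TAG}",
    method="HEAD",
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.oci.image.index.v1+json, "
                  "application/vnd.docker.distribution.manifest.list.v2+json",
    },
)
try:
    urllib.request.urlopen(req)
    print(f"{REPO}:{TAG} exists")
except urllib.error.HTTPError as err:
    # A 404 here matches containerd's "failed to resolve image ... not found".
    print(f"{REPO}:{TAG} -> HTTP {err.code}")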
Jan 22 00:35:25.932826 containerd[1629]: time="2026-01-22T00:35:25.932807607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 22 00:35:25.933035 kubelet[2805]: E0122 00:35:25.932977 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bbb7b878c-n8l2v" podUID="53fd5176-64d2-4a70-9167-8081a837fe6e" Jan 22 00:35:26.058152 containerd[1629]: time="2026-01-22T00:35:26.058123109Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:35:26.058915 containerd[1629]: time="2026-01-22T00:35:26.058886039Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 22 00:35:26.058915 containerd[1629]: time="2026-01-22T00:35:26.058933053Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 22 00:35:26.059060 kubelet[2805]: E0122 00:35:26.059029 2805 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 22 00:35:26.059060 kubelet[2805]: E0122 00:35:26.059051 2805 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 22 00:35:26.059288 kubelet[2805]: E0122 00:35:26.059108 2805 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-wm244_calico-system(ae45d206-785e-4662-9efc-4b0987941483): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 22 00:35:26.060172 containerd[1629]: time="2026-01-22T00:35:26.060156747Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 22 00:35:26.190772 containerd[1629]: time="2026-01-22T00:35:26.190679797Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:35:26.191619 containerd[1629]: time="2026-01-22T00:35:26.191584637Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 22 00:35:26.191700 containerd[1629]: time="2026-01-22T00:35:26.191645252Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 22 00:35:26.191826 kubelet[2805]: E0122 00:35:26.191797 2805 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 22 00:35:26.192865 kubelet[2805]: E0122 00:35:26.191827 2805 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 22 00:35:26.192865 kubelet[2805]: E0122 00:35:26.191898 2805 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-wm244_calico-system(ae45d206-785e-4662-9efc-4b0987941483): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 22 00:35:26.192865 kubelet[2805]: E0122 00:35:26.192142 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wm244" podUID="ae45d206-785e-4662-9efc-4b0987941483" Jan 22 00:35:26.648728 kubelet[2805]: E0122 00:35:26.648652 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wm244" podUID="ae45d206-785e-4662-9efc-4b0987941483" Jan 22 00:35:26.652456 kubelet[2805]: E0122 00:35:26.652073 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:35:26.652575 kubelet[2805]: E0122 00:35:26.652548 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-7c778bb748-jv75z" podUID="7058f0b0-e750-4a4f-832e-cf58713e25a5" Jan 22 00:35:26.652634 kubelet[2805]: E0122 00:35:26.652615 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bbb7b878c-n8l2v" podUID="53fd5176-64d2-4a70-9167-8081a837fe6e" Jan 22 00:35:26.652865 kubelet[2805]: E0122 00:35:26.652828 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:35:26.714000 audit[4726]: NETFILTER_CFG table=filter:123 family=2 entries=19 op=nft_register_rule pid=4726 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:35:26.714000 audit[4726]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff82266b10 a2=0 a3=7fff82266afc items=0 ppid=2914 pid=4726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:26.714000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:35:26.730000 audit[4726]: NETFILTER_CFG table=nat:124 family=2 entries=45 op=nft_register_chain pid=4726 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:35:26.730000 audit[4726]: SYSCALL arch=c000003e syscall=46 success=yes exit=19092 a0=3 a1=7fff82266b10 a2=0 a3=7fff82266afc items=0 ppid=2914 pid=4726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:26.730000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:35:26.805692 systemd-networkd[1515]: cali1d4a663e250: Gained IPv6LL Jan 22 00:35:26.867653 systemd-networkd[1515]: cali9c08672e1b3: Gained IPv6LL Jan 22 00:35:27.424066 containerd[1629]: time="2026-01-22T00:35:27.423892699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74c55dd4c7-ljvfs,Uid:c579fa7a-8ee2-4338-b81e-6fc1959a328f,Namespace:calico-system,Attempt:0,}" Jan 22 00:35:27.572304 systemd-networkd[1515]: cali118dd7f1911: Gained IPv6LL Jan 22 00:35:27.580221 systemd-networkd[1515]: cali4e73ce1c224: Link UP Jan 22 00:35:27.580729 systemd-networkd[1515]: cali4e73ce1c224: Gained carrier Jan 22 00:35:27.597803 containerd[1629]: 2026-01-22 00:35:27.466 [INFO][4744] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 22 00:35:27.597803 containerd[1629]: 2026-01-22 00:35:27.477 [INFO][4744] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--4--171-k8s-calico--kube--controllers--74c55dd4c7--ljvfs-eth0 calico-kube-controllers-74c55dd4c7- calico-system c579fa7a-8ee2-4338-b81e-6fc1959a328f 823 0 2026-01-22 00:35:05 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:74c55dd4c7 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-232-4-171 calico-kube-controllers-74c55dd4c7-ljvfs eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4e73ce1c224 [] [] }} ContainerID="a8db5acbb790565c58a0118f74d20079826a0a93ec18f4c994be7b29d59d2826" Namespace="calico-system" Pod="calico-kube-controllers-74c55dd4c7-ljvfs" WorkloadEndpoint="172--232--4--171-k8s-calico--kube--controllers--74c55dd4c7--ljvfs-" Jan 22 00:35:27.597803 containerd[1629]: 2026-01-22 00:35:27.477 [INFO][4744] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a8db5acbb790565c58a0118f74d20079826a0a93ec18f4c994be7b29d59d2826" Namespace="calico-system" Pod="calico-kube-controllers-74c55dd4c7-ljvfs" WorkloadEndpoint="172--232--4--171-k8s-calico--kube--controllers--74c55dd4c7--ljvfs-eth0" Jan 22 00:35:27.597803 containerd[1629]: 2026-01-22 00:35:27.522 [INFO][4760] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a8db5acbb790565c58a0118f74d20079826a0a93ec18f4c994be7b29d59d2826" HandleID="k8s-pod-network.a8db5acbb790565c58a0118f74d20079826a0a93ec18f4c994be7b29d59d2826" Workload="172--232--4--171-k8s-calico--kube--controllers--74c55dd4c7--ljvfs-eth0" Jan 22 00:35:27.597803 containerd[1629]: 2026-01-22 00:35:27.522 [INFO][4760] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a8db5acbb790565c58a0118f74d20079826a0a93ec18f4c994be7b29d59d2826" HandleID="k8s-pod-network.a8db5acbb790565c58a0118f74d20079826a0a93ec18f4c994be7b29d59d2826" Workload="172--232--4--171-k8s-calico--kube--controllers--74c55dd4c7--ljvfs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000323960), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-4-171", "pod":"calico-kube-controllers-74c55dd4c7-ljvfs", "timestamp":"2026-01-22 00:35:27.522414327 +0000 UTC"}, Hostname:"172-232-4-171", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 22 00:35:27.597803 containerd[1629]: 2026-01-22 00:35:27.522 [INFO][4760] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 22 00:35:27.597803 containerd[1629]: 2026-01-22 00:35:27.522 [INFO][4760] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 22 00:35:27.597803 containerd[1629]: 2026-01-22 00:35:27.522 [INFO][4760] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-4-171' Jan 22 00:35:27.597803 containerd[1629]: 2026-01-22 00:35:27.529 [INFO][4760] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a8db5acbb790565c58a0118f74d20079826a0a93ec18f4c994be7b29d59d2826" host="172-232-4-171" Jan 22 00:35:27.597803 containerd[1629]: 2026-01-22 00:35:27.533 [INFO][4760] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-4-171" Jan 22 00:35:27.597803 containerd[1629]: 2026-01-22 00:35:27.538 [INFO][4760] ipam/ipam.go 511: Trying affinity for 192.168.71.64/26 host="172-232-4-171" Jan 22 00:35:27.597803 containerd[1629]: 2026-01-22 00:35:27.540 [INFO][4760] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.64/26 host="172-232-4-171" Jan 22 00:35:27.597803 containerd[1629]: 2026-01-22 00:35:27.545 [INFO][4760] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.64/26 host="172-232-4-171" Jan 22 00:35:27.597803 containerd[1629]: 2026-01-22 00:35:27.545 [INFO][4760] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.71.64/26 handle="k8s-pod-network.a8db5acbb790565c58a0118f74d20079826a0a93ec18f4c994be7b29d59d2826" host="172-232-4-171" Jan 22 00:35:27.597803 containerd[1629]: 2026-01-22 00:35:27.547 [INFO][4760] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a8db5acbb790565c58a0118f74d20079826a0a93ec18f4c994be7b29d59d2826 Jan 22 00:35:27.597803 containerd[1629]: 2026-01-22 00:35:27.557 [INFO][4760] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.71.64/26 handle="k8s-pod-network.a8db5acbb790565c58a0118f74d20079826a0a93ec18f4c994be7b29d59d2826" host="172-232-4-171" Jan 22 00:35:27.597803 containerd[1629]: 2026-01-22 00:35:27.565 [INFO][4760] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.71.72/26] block=192.168.71.64/26 handle="k8s-pod-network.a8db5acbb790565c58a0118f74d20079826a0a93ec18f4c994be7b29d59d2826" host="172-232-4-171" Jan 22 00:35:27.597803 containerd[1629]: 2026-01-22 00:35:27.565 [INFO][4760] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.72/26] handle="k8s-pod-network.a8db5acbb790565c58a0118f74d20079826a0a93ec18f4c994be7b29d59d2826" host="172-232-4-171" Jan 22 00:35:27.597803 containerd[1629]: 2026-01-22 00:35:27.565 [INFO][4760] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
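The audit SYSCALL records throughout this section identify syscalls by number for the x86_64 ABI (arch=c000003e): the NETFILTER_CFG events pair with syscall 46 (sendmsg, iptables-restore pushing its ruleset over netlink), the "BPF prog-id=... op=LOAD" events pair with syscall 321 (bpf), and the op=UNLOAD events appear alongside syscall 3 (close) as the just-loaded program fd is released. A small lookup table, with field values copied from the records above, makes them easier to read:

# x86_64 syscall numbers that appear in the audit SYSCALL records here.
X86_64_SYSCALLS = {
    3: "close",     # follows the "BPF prog-id=... op=UNLOAD" events
    46: "sendmsg",  # iptables-restore pushing nft rules over netlink
    321: "bpf",     # runc/bpftool loading programs ("op=LOAD" events)
}

def describe(fields):
    # `fields` stands in for a parsed audit record; only two keys are used.
    num = int(fields["syscall"])
    return f"{fields['comm']} called {X86_64_SYSCALLS.get(num, num)}"

# Values copied from records above ("comm" is truncated to 15 chars by audit).
print(describe({"syscall": "46", "comm": "iptables-restor"}))   # sendmsg
print(describe({"syscall": "321", "comm": "runc"}))             # bpf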
Jan 22 00:35:27.597803 containerd[1629]: 2026-01-22 00:35:27.565 [INFO][4760] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.71.72/26] IPv6=[] ContainerID="a8db5acbb790565c58a0118f74d20079826a0a93ec18f4c994be7b29d59d2826" HandleID="k8s-pod-network.a8db5acbb790565c58a0118f74d20079826a0a93ec18f4c994be7b29d59d2826" Workload="172--232--4--171-k8s-calico--kube--controllers--74c55dd4c7--ljvfs-eth0" Jan 22 00:35:27.598430 containerd[1629]: 2026-01-22 00:35:27.568 [INFO][4744] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a8db5acbb790565c58a0118f74d20079826a0a93ec18f4c994be7b29d59d2826" Namespace="calico-system" Pod="calico-kube-controllers-74c55dd4c7-ljvfs" WorkloadEndpoint="172--232--4--171-k8s-calico--kube--controllers--74c55dd4c7--ljvfs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--4--171-k8s-calico--kube--controllers--74c55dd4c7--ljvfs-eth0", GenerateName:"calico-kube-controllers-74c55dd4c7-", Namespace:"calico-system", SelfLink:"", UID:"c579fa7a-8ee2-4338-b81e-6fc1959a328f", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 35, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74c55dd4c7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-4-171", ContainerID:"", Pod:"calico-kube-controllers-74c55dd4c7-ljvfs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.71.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4e73ce1c224", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:35:27.598430 containerd[1629]: 2026-01-22 00:35:27.568 [INFO][4744] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.72/32] ContainerID="a8db5acbb790565c58a0118f74d20079826a0a93ec18f4c994be7b29d59d2826" Namespace="calico-system" Pod="calico-kube-controllers-74c55dd4c7-ljvfs" WorkloadEndpoint="172--232--4--171-k8s-calico--kube--controllers--74c55dd4c7--ljvfs-eth0" Jan 22 00:35:27.598430 containerd[1629]: 2026-01-22 00:35:27.568 [INFO][4744] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e73ce1c224 ContainerID="a8db5acbb790565c58a0118f74d20079826a0a93ec18f4c994be7b29d59d2826" Namespace="calico-system" Pod="calico-kube-controllers-74c55dd4c7-ljvfs" WorkloadEndpoint="172--232--4--171-k8s-calico--kube--controllers--74c55dd4c7--ljvfs-eth0" Jan 22 00:35:27.598430 containerd[1629]: 2026-01-22 00:35:27.579 [INFO][4744] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a8db5acbb790565c58a0118f74d20079826a0a93ec18f4c994be7b29d59d2826" Namespace="calico-system" Pod="calico-kube-controllers-74c55dd4c7-ljvfs" WorkloadEndpoint="172--232--4--171-k8s-calico--kube--controllers--74c55dd4c7--ljvfs-eth0" Jan 22 00:35:27.598430 containerd[1629]: 2026-01-22 00:35:27.581 
[INFO][4744] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a8db5acbb790565c58a0118f74d20079826a0a93ec18f4c994be7b29d59d2826" Namespace="calico-system" Pod="calico-kube-controllers-74c55dd4c7-ljvfs" WorkloadEndpoint="172--232--4--171-k8s-calico--kube--controllers--74c55dd4c7--ljvfs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--4--171-k8s-calico--kube--controllers--74c55dd4c7--ljvfs-eth0", GenerateName:"calico-kube-controllers-74c55dd4c7-", Namespace:"calico-system", SelfLink:"", UID:"c579fa7a-8ee2-4338-b81e-6fc1959a328f", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 35, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74c55dd4c7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-4-171", ContainerID:"a8db5acbb790565c58a0118f74d20079826a0a93ec18f4c994be7b29d59d2826", Pod:"calico-kube-controllers-74c55dd4c7-ljvfs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.71.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4e73ce1c224", MAC:"fe:ed:ad:33:31:d9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:35:27.598430 containerd[1629]: 2026-01-22 00:35:27.593 [INFO][4744] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a8db5acbb790565c58a0118f74d20079826a0a93ec18f4c994be7b29d59d2826" Namespace="calico-system" Pod="calico-kube-controllers-74c55dd4c7-ljvfs" WorkloadEndpoint="172--232--4--171-k8s-calico--kube--controllers--74c55dd4c7--ljvfs-eth0" Jan 22 00:35:27.629057 containerd[1629]: time="2026-01-22T00:35:27.629016018Z" level=info msg="connecting to shim a8db5acbb790565c58a0118f74d20079826a0a93ec18f4c994be7b29d59d2826" address="unix:///run/containerd/s/d4a968a2a5b28cf66c42a821effbb0f9a726acb606786c1ae7d2ee6de83eaea8" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:35:27.655026 kubelet[2805]: E0122 00:35:27.653815 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:35:27.655026 kubelet[2805]: E0122 00:35:27.653966 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:35:27.655026 kubelet[2805]: E0122 00:35:27.654929 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to 
resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wm244" podUID="ae45d206-785e-4662-9efc-4b0987941483" Jan 22 00:35:27.655901 kubelet[2805]: E0122 00:35:27.655693 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bbb7b878c-n8l2v" podUID="53fd5176-64d2-4a70-9167-8081a837fe6e" Jan 22 00:35:27.669711 systemd[1]: Started cri-containerd-a8db5acbb790565c58a0118f74d20079826a0a93ec18f4c994be7b29d59d2826.scope - libcontainer container a8db5acbb790565c58a0118f74d20079826a0a93ec18f4c994be7b29d59d2826. Jan 22 00:35:27.692000 audit: BPF prog-id=232 op=LOAD Jan 22 00:35:27.693000 audit: BPF prog-id=233 op=LOAD Jan 22 00:35:27.693000 audit[4794]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4783 pid=4794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:27.693000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138646235616362623739303536356335386130313138663734643230 Jan 22 00:35:27.693000 audit: BPF prog-id=233 op=UNLOAD Jan 22 00:35:27.693000 audit[4794]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4783 pid=4794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:27.693000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138646235616362623739303536356335386130313138663734643230 Jan 22 00:35:27.694000 audit: BPF prog-id=234 op=LOAD Jan 22 00:35:27.694000 audit[4794]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4783 pid=4794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:27.694000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138646235616362623739303536356335386130313138663734643230 Jan 22 00:35:27.694000 audit: BPF prog-id=235 op=LOAD Jan 22 00:35:27.694000 audit[4794]: 
SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4783 pid=4794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:27.694000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138646235616362623739303536356335386130313138663734643230 Jan 22 00:35:27.694000 audit: BPF prog-id=235 op=UNLOAD Jan 22 00:35:27.694000 audit[4794]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4783 pid=4794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:27.694000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138646235616362623739303536356335386130313138663734643230 Jan 22 00:35:27.694000 audit: BPF prog-id=234 op=UNLOAD Jan 22 00:35:27.694000 audit[4794]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4783 pid=4794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:27.694000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138646235616362623739303536356335386130313138663734643230 Jan 22 00:35:27.694000 audit: BPF prog-id=236 op=LOAD Jan 22 00:35:27.694000 audit[4794]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4783 pid=4794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:27.694000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138646235616362623739303536356335386130313138663734643230 Jan 22 00:35:27.734345 containerd[1629]: time="2026-01-22T00:35:27.734275707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74c55dd4c7-ljvfs,Uid:c579fa7a-8ee2-4338-b81e-6fc1959a328f,Namespace:calico-system,Attempt:0,} returns sandbox id \"a8db5acbb790565c58a0118f74d20079826a0a93ec18f4c994be7b29d59d2826\"" Jan 22 00:35:27.736279 containerd[1629]: time="2026-01-22T00:35:27.736260068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 22 00:35:27.873598 containerd[1629]: time="2026-01-22T00:35:27.873554460Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:35:27.874410 containerd[1629]: time="2026-01-22T00:35:27.874378923Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 22 00:35:27.874464 containerd[1629]: time="2026-01-22T00:35:27.874435357Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 22 00:35:27.874637 kubelet[2805]: E0122 00:35:27.874561 2805 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 22 00:35:27.874637 kubelet[2805]: E0122 00:35:27.874623 2805 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 22 00:35:27.874738 kubelet[2805]: E0122 00:35:27.874681 2805 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-74c55dd4c7-ljvfs_calico-system(c579fa7a-8ee2-4338-b81e-6fc1959a328f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 22 00:35:27.874738 kubelet[2805]: E0122 00:35:27.874711 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74c55dd4c7-ljvfs" podUID="c579fa7a-8ee2-4338-b81e-6fc1959a328f" Jan 22 00:35:28.596831 systemd-networkd[1515]: cali4e73ce1c224: Gained IPv6LL Jan 22 00:35:28.659112 kubelet[2805]: E0122 00:35:28.659036 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74c55dd4c7-ljvfs" podUID="c579fa7a-8ee2-4338-b81e-6fc1959a328f" Jan 22 00:35:29.661073 kubelet[2805]: E0122 00:35:29.660836 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74c55dd4c7-ljvfs" podUID="c579fa7a-8ee2-4338-b81e-6fc1959a328f" Jan 22 00:35:33.457710 kubelet[2805]: I0122 00:35:33.457657 2805 prober_manager.go:312] "Failed to trigger a manual run" 
probe="Readiness" Jan 22 00:35:33.459038 kubelet[2805]: E0122 00:35:33.459010 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:35:33.505000 audit[4927]: NETFILTER_CFG table=filter:125 family=2 entries=15 op=nft_register_rule pid=4927 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:35:33.508316 kernel: kauditd_printk_skb: 206 callbacks suppressed Jan 22 00:35:33.508396 kernel: audit: type=1325 audit(1769042133.505:679): table=filter:125 family=2 entries=15 op=nft_register_rule pid=4927 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:35:33.520541 kernel: audit: type=1300 audit(1769042133.505:679): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffce2fc8db0 a2=0 a3=7ffce2fc8d9c items=0 ppid=2914 pid=4927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:33.505000 audit[4927]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffce2fc8db0 a2=0 a3=7ffce2fc8d9c items=0 ppid=2914 pid=4927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:33.525548 kernel: audit: type=1327 audit(1769042133.505:679): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:35:33.505000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:35:33.514000 audit[4927]: NETFILTER_CFG table=nat:126 family=2 entries=25 op=nft_register_chain pid=4927 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:35:33.529989 kernel: audit: type=1325 audit(1769042133.514:680): table=nat:126 family=2 entries=25 op=nft_register_chain pid=4927 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:35:33.514000 audit[4927]: SYSCALL arch=c000003e syscall=46 success=yes exit=8580 a0=3 a1=7ffce2fc8db0 a2=0 a3=7ffce2fc8d9c items=0 ppid=2914 pid=4927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:33.531961 kernel: audit: type=1300 audit(1769042133.514:680): arch=c000003e syscall=46 success=yes exit=8580 a0=3 a1=7ffce2fc8db0 a2=0 a3=7ffce2fc8d9c items=0 ppid=2914 pid=4927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:33.514000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:35:33.539662 kernel: audit: type=1327 audit(1769042133.514:680): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:35:33.671699 kubelet[2805]: E0122 00:35:33.671664 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:35:34.216000 audit: BPF prog-id=237 op=LOAD Jan 22 00:35:34.220535 kernel: audit: 
type=1334 audit(1769042134.216:681): prog-id=237 op=LOAD Jan 22 00:35:34.230290 kernel: audit: type=1300 audit(1769042134.216:681): arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd50d0a480 a2=98 a3=1fffffffffffffff items=0 ppid=4953 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.216000 audit[4983]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd50d0a480 a2=98 a3=1fffffffffffffff items=0 ppid=4953 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.216000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 22 00:35:34.244881 kernel: audit: type=1327 audit(1769042134.216:681): proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 22 00:35:34.216000 audit: BPF prog-id=237 op=UNLOAD Jan 22 00:35:34.253539 kernel: audit: type=1334 audit(1769042134.216:682): prog-id=237 op=UNLOAD Jan 22 00:35:34.216000 audit[4983]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd50d0a450 a3=0 items=0 ppid=4953 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.216000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 22 00:35:34.216000 audit: BPF prog-id=238 op=LOAD Jan 22 00:35:34.216000 audit[4983]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd50d0a360 a2=94 a3=3 items=0 ppid=4953 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.216000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 22 00:35:34.216000 audit: BPF prog-id=238 op=UNLOAD Jan 22 00:35:34.216000 audit[4983]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd50d0a360 a2=94 a3=3 items=0 ppid=4953 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.216000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 22 00:35:34.216000 audit: BPF prog-id=239 op=LOAD Jan 22 00:35:34.216000 audit[4983]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd50d0a3a0 a2=94 a3=7ffd50d0a580 items=0 ppid=4953 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.216000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 22 00:35:34.216000 audit: BPF prog-id=239 op=UNLOAD Jan 22 00:35:34.216000 audit[4983]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd50d0a3a0 a2=94 a3=7ffd50d0a580 items=0 ppid=4953 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.216000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 22 00:35:34.218000 audit: BPF prog-id=240 op=LOAD Jan 22 00:35:34.218000 audit[4984]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff952b37e0 a2=98 a3=3 items=0 ppid=4953 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.218000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:35:34.218000 audit: BPF prog-id=240 op=UNLOAD Jan 22 00:35:34.218000 audit[4984]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff952b37b0 a3=0 items=0 ppid=4953 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.218000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:35:34.218000 audit: BPF prog-id=241 op=LOAD Jan 22 00:35:34.218000 audit[4984]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff952b35d0 a2=94 a3=54428f items=0 ppid=4953 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.218000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:35:34.218000 audit: BPF prog-id=241 op=UNLOAD Jan 22 00:35:34.218000 audit[4984]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff952b35d0 a2=94 a3=54428f items=0 ppid=4953 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.218000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:35:34.218000 audit: BPF prog-id=242 op=LOAD Jan 22 00:35:34.218000 audit[4984]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff952b3600 a2=94 a3=2 items=0 ppid=4953 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.218000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:35:34.218000 audit: BPF prog-id=242 op=UNLOAD Jan 22 00:35:34.218000 audit[4984]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff952b3600 a2=0 a3=2 items=0 ppid=4953 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.218000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:35:34.416000 audit: BPF prog-id=243 op=LOAD Jan 22 00:35:34.416000 audit[4984]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff952b34c0 a2=94 a3=1 items=0 ppid=4953 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.416000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:35:34.416000 audit: BPF prog-id=243 op=UNLOAD Jan 22 00:35:34.416000 audit[4984]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff952b34c0 a2=94 a3=1 items=0 ppid=4953 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.416000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:35:34.425000 audit: BPF prog-id=244 op=LOAD Jan 22 00:35:34.425000 audit[4984]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff952b34b0 a2=94 a3=4 items=0 ppid=4953 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.425000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:35:34.425000 audit: BPF prog-id=244 op=UNLOAD Jan 22 00:35:34.425000 audit[4984]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff952b34b0 a2=0 a3=4 items=0 ppid=4953 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.425000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:35:34.426000 audit: BPF prog-id=245 op=LOAD Jan 22 00:35:34.426000 audit[4984]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff952b3310 a2=94 a3=5 items=0 ppid=4953 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.426000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:35:34.426000 audit: BPF prog-id=245 op=UNLOAD Jan 22 00:35:34.426000 audit[4984]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=6 a1=7fff952b3310 a2=0 a3=5 items=0 ppid=4953 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.426000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:35:34.426000 audit: BPF prog-id=246 op=LOAD Jan 22 00:35:34.426000 audit[4984]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff952b3530 a2=94 a3=6 items=0 ppid=4953 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.426000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:35:34.426000 audit: BPF prog-id=246 op=UNLOAD Jan 22 00:35:34.426000 audit[4984]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff952b3530 a2=0 a3=6 items=0 ppid=4953 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.426000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:35:34.426000 audit: BPF prog-id=247 op=LOAD Jan 22 00:35:34.426000 audit[4984]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff952b2ce0 a2=94 a3=88 items=0 ppid=4953 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.426000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:35:34.426000 audit: BPF prog-id=248 op=LOAD Jan 22 00:35:34.426000 audit[4984]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff952b2b60 a2=94 a3=2 items=0 ppid=4953 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.426000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:35:34.426000 audit: BPF prog-id=248 op=UNLOAD Jan 22 00:35:34.426000 audit[4984]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff952b2b90 a2=0 a3=7fff952b2c90 items=0 ppid=4953 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.426000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:35:34.427000 audit: BPF prog-id=247 op=UNLOAD Jan 22 00:35:34.427000 audit[4984]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=dc21d10 a2=0 a3=43274d40dc3a9b1e items=0 ppid=4953 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.427000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:35:34.441000 audit: BPF prog-id=249 op=LOAD Jan 22 00:35:34.441000 audit[4989]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeefcb37e0 a2=98 a3=1999999999999999 items=0 ppid=4953 pid=4989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.441000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 22 00:35:34.442000 audit: BPF prog-id=249 op=UNLOAD Jan 22 00:35:34.442000 audit[4989]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffeefcb37b0 a3=0 items=0 ppid=4953 pid=4989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.442000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 22 00:35:34.442000 audit: BPF prog-id=250 op=LOAD Jan 22 00:35:34.442000 audit[4989]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeefcb36c0 a2=94 a3=ffff items=0 ppid=4953 pid=4989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.442000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 22 00:35:34.442000 audit: BPF prog-id=250 op=UNLOAD Jan 22 00:35:34.442000 audit[4989]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffeefcb36c0 a2=94 a3=ffff items=0 ppid=4953 pid=4989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.442000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 22 00:35:34.442000 audit: BPF prog-id=251 op=LOAD Jan 22 00:35:34.442000 audit[4989]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeefcb3700 a2=94 a3=7ffeefcb38e0 items=0 ppid=4953 pid=4989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.442000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 22 00:35:34.442000 audit: BPF prog-id=251 op=UNLOAD Jan 22 00:35:34.442000 audit[4989]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffeefcb3700 a2=94 a3=7ffeefcb38e0 items=0 ppid=4953 pid=4989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
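The proctitle= fields in these audit records use the standard Linux audit PROCTITLE encoding: the process's full command line, hex-encoded, with NUL bytes separating the arguments. A minimal decoding sketch in Python (the helper name is illustrative; the example blob is the bpftool map create record captured above):

```python
def decode_proctitle(hex_blob: str) -> list[str]:
    """Decode an audit PROCTITLE value: hex-encoded argv, NUL-separated."""
    if len(hex_blob) % 2:              # drop a dangling half-byte if the copied blob is truncated
        hex_blob = hex_blob[:-1]
    raw = bytes.fromhex(hex_blob)
    return [arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00") if arg]

# One of the bpftool records logged above:
BLOB = (
    "627066746F6F6C006D61700063726561746500"
    "2F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300"
    "747970650070726F675F6172726179006B657900340076616C7565003400"
    "656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030"
)
print(" ".join(decode_proctitle(BLOB)))
# bpftool map create /sys/fs/bpf/tc/globals/cali_ctlb_progs type prog_array key 4 value 4 entries 3 name cali_ctlb_progs flags 0
```

Decoded the same way, the runc and iptables-restore blobs elsewhere in this excerpt become readable command lines at a glance.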
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.442000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 22 00:35:34.520704 systemd-networkd[1515]: vxlan.calico: Link UP Jan 22 00:35:34.520719 systemd-networkd[1515]: vxlan.calico: Gained carrier Jan 22 00:35:34.555000 audit: BPF prog-id=252 op=LOAD Jan 22 00:35:34.555000 audit[5015]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff1649a620 a2=98 a3=0 items=0 ppid=4953 pid=5015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.555000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:35:34.555000 audit: BPF prog-id=252 op=UNLOAD Jan 22 00:35:34.555000 audit[5015]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff1649a5f0 a3=0 items=0 ppid=4953 pid=5015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.555000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:35:34.555000 audit: BPF prog-id=253 op=LOAD Jan 22 00:35:34.555000 audit[5015]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff1649a430 a2=94 a3=54428f items=0 ppid=4953 pid=5015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.555000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:35:34.555000 audit: BPF prog-id=253 op=UNLOAD Jan 22 00:35:34.555000 audit[5015]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff1649a430 a2=94 a3=54428f items=0 ppid=4953 pid=5015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.555000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:35:34.555000 audit: BPF prog-id=254 op=LOAD Jan 22 00:35:34.555000 audit[5015]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff1649a460 a2=94 a3=2 items=0 ppid=4953 pid=5015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.555000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:35:34.555000 audit: BPF prog-id=254 op=UNLOAD Jan 22 00:35:34.555000 audit[5015]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff1649a460 a2=0 a3=2 items=0 ppid=4953 pid=5015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.555000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:35:34.556000 audit: BPF prog-id=255 op=LOAD Jan 22 00:35:34.556000 audit[5015]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff1649a210 a2=94 a3=4 items=0 ppid=4953 pid=5015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.556000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:35:34.556000 audit: BPF prog-id=255 op=UNLOAD Jan 22 00:35:34.556000 audit[5015]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff1649a210 a2=94 a3=4 items=0 ppid=4953 pid=5015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.556000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:35:34.556000 audit: BPF prog-id=256 op=LOAD Jan 22 00:35:34.556000 audit[5015]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff1649a310 a2=94 a3=7fff1649a490 items=0 ppid=4953 pid=5015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.556000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:35:34.556000 audit: BPF prog-id=256 op=UNLOAD Jan 22 00:35:34.556000 audit[5015]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff1649a310 a2=0 a3=7fff1649a490 items=0 ppid=4953 pid=5015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.556000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:35:34.557000 audit: BPF prog-id=257 op=LOAD Jan 22 00:35:34.557000 audit[5015]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff16499a40 a2=94 a3=2 items=0 ppid=4953 pid=5015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.557000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:35:34.557000 audit: BPF prog-id=257 op=UNLOAD Jan 22 00:35:34.557000 audit[5015]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff16499a40 a2=0 a3=2 items=0 ppid=4953 pid=5015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.557000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:35:34.557000 audit: BPF prog-id=258 op=LOAD Jan 22 00:35:34.557000 audit[5015]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff16499b40 a2=94 a3=30 items=0 ppid=4953 pid=5015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.557000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:35:34.566000 audit: BPF prog-id=259 op=LOAD Jan 22 00:35:34.566000 audit[5019]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffda23bf820 a2=98 a3=0 items=0 ppid=4953 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.566000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:35:34.566000 audit: BPF prog-id=259 op=UNLOAD Jan 22 00:35:34.566000 audit[5019]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffda23bf7f0 a3=0 items=0 ppid=4953 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.566000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:35:34.566000 audit: BPF prog-id=260 op=LOAD Jan 22 00:35:34.566000 audit[5019]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffda23bf610 a2=94 a3=54428f items=0 ppid=4953 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.566000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:35:34.567000 audit: BPF prog-id=260 op=UNLOAD Jan 22 00:35:34.567000 audit[5019]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffda23bf610 a2=94 a3=54428f items=0 ppid=4953 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.567000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:35:34.567000 audit: BPF prog-id=261 op=LOAD Jan 22 00:35:34.567000 audit[5019]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffda23bf640 a2=94 a3=2 items=0 ppid=4953 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.567000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:35:34.567000 audit: BPF prog-id=261 op=UNLOAD Jan 22 00:35:34.567000 audit[5019]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffda23bf640 a2=0 a3=2 items=0 ppid=4953 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.567000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:35:34.748000 audit: BPF prog-id=262 op=LOAD Jan 22 00:35:34.748000 audit[5019]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffda23bf500 a2=94 a3=1 items=0 ppid=4953 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.748000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:35:34.749000 audit: BPF prog-id=262 op=UNLOAD Jan 22 00:35:34.749000 audit[5019]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffda23bf500 a2=94 a3=1 items=0 ppid=4953 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.749000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:35:34.758000 audit: BPF prog-id=263 op=LOAD Jan 22 00:35:34.758000 audit[5019]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffda23bf4f0 a2=94 a3=4 items=0 ppid=4953 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.758000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:35:34.758000 audit: BPF prog-id=263 op=UNLOAD Jan 22 00:35:34.758000 audit[5019]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffda23bf4f0 a2=0 a3=4 items=0 ppid=4953 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.758000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:35:34.759000 audit: BPF prog-id=264 op=LOAD Jan 22 00:35:34.759000 audit[5019]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffda23bf350 a2=94 a3=5 items=0 ppid=4953 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.759000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:35:34.759000 audit: BPF prog-id=264 op=UNLOAD Jan 22 00:35:34.759000 audit[5019]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffda23bf350 a2=0 a3=5 items=0 ppid=4953 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.759000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:35:34.759000 audit: BPF prog-id=265 op=LOAD Jan 22 00:35:34.759000 audit[5019]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffda23bf570 a2=94 a3=6 items=0 ppid=4953 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.759000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:35:34.759000 audit: BPF prog-id=265 op=UNLOAD Jan 22 00:35:34.759000 audit[5019]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffda23bf570 a2=0 a3=6 items=0 ppid=4953 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.759000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:35:34.759000 audit: BPF prog-id=266 op=LOAD Jan 22 00:35:34.759000 audit[5019]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffda23bed20 
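The long run of BPF prog-id=N op=LOAD / op=UNLOAD pairs here appears to be calico-node (felix, ppid 4953 in these records) driving bpftool to create its pinned maps and load its XDP/TC programs; the short-lived LOAD/UNLOAD pairs are feature-probe programs that are discarded immediately. One way to confirm from an excerpt like this that every probe cleaned up after itself is to pair the records by prog-id; a rough sketch, assuming the journal has been saved to a file (name illustrative):

```python
import re
from collections import OrderedDict

# Matches the userspace audit records, e.g. "audit: BPF prog-id=237 op=LOAD"
# (the duplicated "kernel: audit: type=1334 ..." echoes deliberately don't match).
BPF_RE = re.compile(r"audit: BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

def unmatched_loads(lines):
    """Return prog-ids that were loaded but not unloaded within the excerpt."""
    pending = OrderedDict()
    for line in lines:
        for prog_id, op in BPF_RE.findall(line):
            if op == "LOAD":
                pending[prog_id] = True
            else:
                pending.pop(prog_id, None)
    return list(pending)

with open("journal-excerpt.log") as fh:   # illustrative path
    leftover = unmatched_loads(fh)
print(leftover or "every probe program in the excerpt was unloaded")
```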
a2=94 a3=88 items=0 ppid=4953 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.759000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:35:34.759000 audit: BPF prog-id=267 op=LOAD Jan 22 00:35:34.759000 audit[5019]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffda23beba0 a2=94 a3=2 items=0 ppid=4953 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.759000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:35:34.759000 audit: BPF prog-id=267 op=UNLOAD Jan 22 00:35:34.759000 audit[5019]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffda23bebd0 a2=0 a3=7ffda23becd0 items=0 ppid=4953 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.759000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:35:34.760000 audit: BPF prog-id=266 op=UNLOAD Jan 22 00:35:34.760000 audit[5019]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=6f2fd10 a2=0 a3=4e121624cfa636f0 items=0 ppid=4953 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.760000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:35:34.768000 audit: BPF prog-id=258 op=UNLOAD Jan 22 00:35:34.768000 audit[4953]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c00079a5c0 a2=0 a3=0 items=0 ppid=3954 pid=4953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.768000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 22 00:35:34.857000 audit[5050]: NETFILTER_CFG table=mangle:127 family=2 entries=16 op=nft_register_chain pid=5050 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 00:35:34.857000 audit[5050]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffd4de61be0 a2=0 a3=7ffd4de61bcc items=0 ppid=4953 pid=5050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.857000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 
00:35:34.863000 audit[5049]: NETFILTER_CFG table=raw:128 family=2 entries=21 op=nft_register_chain pid=5049 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 00:35:34.863000 audit[5049]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7fff089cdcd0 a2=0 a3=7fff089cdcbc items=0 ppid=4953 pid=5049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.863000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 00:35:34.866000 audit[5052]: NETFILTER_CFG table=nat:129 family=2 entries=15 op=nft_register_chain pid=5052 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 00:35:34.866000 audit[5052]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffe6ea40a50 a2=0 a3=7ffe6ea40a3c items=0 ppid=4953 pid=5052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.866000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 00:35:34.879000 audit[5055]: NETFILTER_CFG table=filter:130 family=2 entries=321 op=nft_register_chain pid=5055 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 00:35:34.879000 audit[5055]: SYSCALL arch=c000003e syscall=46 success=yes exit=190616 a0=3 a1=7ffc08a2a500 a2=0 a3=7ffc08a2a4ec items=0 ppid=4953 pid=5055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:35:34.879000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 00:35:35.424704 containerd[1629]: time="2026-01-22T00:35:35.424648615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 22 00:35:35.598865 containerd[1629]: time="2026-01-22T00:35:35.598785098Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:35:35.599878 containerd[1629]: time="2026-01-22T00:35:35.599805392Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 22 00:35:35.599955 containerd[1629]: time="2026-01-22T00:35:35.599880527Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 22 00:35:35.600157 kubelet[2805]: E0122 00:35:35.600099 2805 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 22 00:35:35.600622 kubelet[2805]: E0122 00:35:35.600160 2805 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 22 00:35:35.600622 kubelet[2805]: E0122 00:35:35.600250 2805 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-687c765d98-jfjzf_calico-system(8ecd20e5-2e31-4297-a10a-ea50808543e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 22 00:35:35.602344 containerd[1629]: time="2026-01-22T00:35:35.602243024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 22 00:35:35.735329 containerd[1629]: time="2026-01-22T00:35:35.735247131Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:35:35.736347 containerd[1629]: time="2026-01-22T00:35:35.736301918Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 22 00:35:35.736444 containerd[1629]: time="2026-01-22T00:35:35.736313148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 22 00:35:35.736718 kubelet[2805]: E0122 00:35:35.736656 2805 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 22 00:35:35.736825 kubelet[2805]: E0122 00:35:35.736733 2805 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 22 00:35:35.736964 kubelet[2805]: E0122 00:35:35.736871 2805 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-687c765d98-jfjzf_calico-system(8ecd20e5-2e31-4297-a10a-ea50808543e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 22 00:35:35.736964 kubelet[2805]: E0122 00:35:35.736931 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-687c765d98-jfjzf" podUID="8ecd20e5-2e31-4297-a10a-ea50808543e7" Jan 22 00:35:36.468316 systemd-networkd[1515]: vxlan.calico: 
Gained IPv6LL Jan 22 00:35:37.422526 containerd[1629]: time="2026-01-22T00:35:37.422460259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 22 00:35:37.559946 containerd[1629]: time="2026-01-22T00:35:37.559878614Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:35:37.561062 containerd[1629]: time="2026-01-22T00:35:37.561029373Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 22 00:35:37.561218 containerd[1629]: time="2026-01-22T00:35:37.561094058Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 22 00:35:37.561276 kubelet[2805]: E0122 00:35:37.561210 2805 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:35:37.561276 kubelet[2805]: E0122 00:35:37.561245 2805 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:35:37.562601 kubelet[2805]: E0122 00:35:37.561355 2805 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-bbb7b878c-c279j_calico-apiserver(ac5cf267-1601-4ae5-91e1-dc1496ea695f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 22 00:35:37.562601 kubelet[2805]: E0122 00:35:37.561388 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bbb7b878c-c279j" podUID="ac5cf267-1601-4ae5-91e1-dc1496ea695f" Jan 22 00:35:39.422560 containerd[1629]: time="2026-01-22T00:35:39.422281236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 22 00:35:39.551213 containerd[1629]: time="2026-01-22T00:35:39.551129417Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:35:39.552609 containerd[1629]: time="2026-01-22T00:35:39.552549198Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 22 00:35:39.552695 containerd[1629]: time="2026-01-22T00:35:39.552619333Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 22 00:35:39.552830 kubelet[2805]: E0122 00:35:39.552781 2805 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:35:39.553254 kubelet[2805]: E0122 00:35:39.552836 2805 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:35:39.553254 kubelet[2805]: E0122 00:35:39.552892 2805 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-bbb7b878c-n8l2v_calico-apiserver(53fd5176-64d2-4a70-9167-8081a837fe6e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 22 00:35:39.553254 kubelet[2805]: E0122 00:35:39.552919 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bbb7b878c-n8l2v" podUID="53fd5176-64d2-4a70-9167-8081a837fe6e" Jan 22 00:35:41.423450 containerd[1629]: time="2026-01-22T00:35:41.423309564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 22 00:35:41.585593 containerd[1629]: time="2026-01-22T00:35:41.585504765Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:35:41.586650 containerd[1629]: time="2026-01-22T00:35:41.586601647Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 22 00:35:41.586690 containerd[1629]: time="2026-01-22T00:35:41.586660250Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 22 00:35:41.586820 kubelet[2805]: E0122 00:35:41.586782 2805 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 22 00:35:41.587293 kubelet[2805]: E0122 00:35:41.586825 2805 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 22 00:35:41.587293 kubelet[2805]: E0122 00:35:41.586898 2805 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-wm244_calico-system(ae45d206-785e-4662-9efc-4b0987941483): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 22 00:35:41.588113 containerd[1629]: 
time="2026-01-22T00:35:41.588086260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 22 00:35:41.729621 containerd[1629]: time="2026-01-22T00:35:41.729590802Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:35:41.730418 containerd[1629]: time="2026-01-22T00:35:41.730370386Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 22 00:35:41.730566 containerd[1629]: time="2026-01-22T00:35:41.730390347Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 22 00:35:41.730651 kubelet[2805]: E0122 00:35:41.730556 2805 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 22 00:35:41.730651 kubelet[2805]: E0122 00:35:41.730589 2805 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 22 00:35:41.730803 kubelet[2805]: E0122 00:35:41.730654 2805 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-wm244_calico-system(ae45d206-785e-4662-9efc-4b0987941483): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 22 00:35:41.730803 kubelet[2805]: E0122 00:35:41.730710 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wm244" podUID="ae45d206-785e-4662-9efc-4b0987941483" Jan 22 00:35:42.426417 containerd[1629]: time="2026-01-22T00:35:42.426326962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 22 00:35:42.564294 containerd[1629]: time="2026-01-22T00:35:42.564237217Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:35:42.565731 containerd[1629]: time="2026-01-22T00:35:42.565597033Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 22 00:35:42.565731 containerd[1629]: time="2026-01-22T00:35:42.565655206Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 22 00:35:42.565966 kubelet[2805]: E0122 00:35:42.565905 2805 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 22 00:35:42.566021 kubelet[2805]: E0122 00:35:42.565997 2805 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 22 00:35:42.566167 kubelet[2805]: E0122 00:35:42.566138 2805 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-jv75z_calico-system(7058f0b0-e750-4a4f-832e-cf58713e25a5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 22 00:35:42.566646 kubelet[2805]: E0122 00:35:42.566188 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-jv75z" podUID="7058f0b0-e750-4a4f-832e-cf58713e25a5" Jan 22 00:35:44.423570 containerd[1629]: time="2026-01-22T00:35:44.423449215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 22 00:35:44.561553 containerd[1629]: time="2026-01-22T00:35:44.561442585Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:35:44.563230 containerd[1629]: time="2026-01-22T00:35:44.563150577Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 22 00:35:44.563299 containerd[1629]: time="2026-01-22T00:35:44.563158848Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 22 00:35:44.563593 kubelet[2805]: E0122 00:35:44.563544 2805 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 22 00:35:44.563861 kubelet[2805]: E0122 00:35:44.563612 2805 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 22 00:35:44.563861 
kubelet[2805]: E0122 00:35:44.563734 2805 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-74c55dd4c7-ljvfs_calico-system(c579fa7a-8ee2-4338-b81e-6fc1959a328f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 22 00:35:44.563861 kubelet[2805]: E0122 00:35:44.563779 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74c55dd4c7-ljvfs" podUID="c579fa7a-8ee2-4338-b81e-6fc1959a328f" Jan 22 00:35:47.424290 kubelet[2805]: E0122 00:35:47.424165 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-687c765d98-jfjzf" podUID="8ecd20e5-2e31-4297-a10a-ea50808543e7" Jan 22 00:35:48.682791 kubelet[2805]: E0122 00:35:48.682725 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:35:50.425898 kubelet[2805]: E0122 00:35:50.425829 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bbb7b878c-c279j" podUID="ac5cf267-1601-4ae5-91e1-dc1496ea695f" Jan 22 00:35:50.430305 kubelet[2805]: E0122 00:35:50.426467 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bbb7b878c-n8l2v" podUID="53fd5176-64d2-4a70-9167-8081a837fe6e" Jan 22 00:35:55.425808 kubelet[2805]: E0122 00:35:55.425577 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74c55dd4c7-ljvfs" podUID="c579fa7a-8ee2-4338-b81e-6fc1959a328f" Jan 22 00:35:55.425808 kubelet[2805]: E0122 00:35:55.425664 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-jv75z" podUID="7058f0b0-e750-4a4f-832e-cf58713e25a5" Jan 22 00:35:55.427303 kubelet[2805]: E0122 00:35:55.427208 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wm244" podUID="ae45d206-785e-4662-9efc-4b0987941483" Jan 22 00:35:59.422920 kubelet[2805]: E0122 00:35:59.422878 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:36:02.423771 containerd[1629]: time="2026-01-22T00:36:02.423721312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 22 00:36:02.570378 containerd[1629]: time="2026-01-22T00:36:02.570331793Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:36:02.571714 containerd[1629]: time="2026-01-22T00:36:02.571632222Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 22 00:36:02.571714 containerd[1629]: time="2026-01-22T00:36:02.571663512Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 22 00:36:02.574602 kubelet[2805]: E0122 00:36:02.574567 2805 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 22 00:36:02.575049 kubelet[2805]: E0122 00:36:02.574617 2805 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 22 00:36:02.575049 kubelet[2805]: E0122 00:36:02.574732 2805 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-687c765d98-jfjzf_calico-system(8ecd20e5-2e31-4297-a10a-ea50808543e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 22 00:36:02.576588 containerd[1629]: time="2026-01-22T00:36:02.576572346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 22 00:36:02.706840 containerd[1629]: time="2026-01-22T00:36:02.706770287Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:36:02.707931 containerd[1629]: time="2026-01-22T00:36:02.707895356Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 22 00:36:02.708018 containerd[1629]: time="2026-01-22T00:36:02.707994326Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 22 00:36:02.708456 kubelet[2805]: E0122 00:36:02.708198 2805 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 22 00:36:02.708456 kubelet[2805]: E0122 00:36:02.708245 2805 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 22 00:36:02.708456 kubelet[2805]: E0122 00:36:02.708341 2805 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-687c765d98-jfjzf_calico-system(8ecd20e5-2e31-4297-a10a-ea50808543e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 22 00:36:02.708456 kubelet[2805]: E0122 00:36:02.708394 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-687c765d98-jfjzf" podUID="8ecd20e5-2e31-4297-a10a-ea50808543e7" Jan 22 00:36:04.423150 containerd[1629]: 
time="2026-01-22T00:36:04.422881390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 22 00:36:04.550536 containerd[1629]: time="2026-01-22T00:36:04.550366611Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:36:04.552557 containerd[1629]: time="2026-01-22T00:36:04.551916823Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 22 00:36:04.552557 containerd[1629]: time="2026-01-22T00:36:04.551987093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 22 00:36:04.552829 kubelet[2805]: E0122 00:36:04.552699 2805 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:36:04.552829 kubelet[2805]: E0122 00:36:04.552738 2805 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:36:04.553799 kubelet[2805]: E0122 00:36:04.552852 2805 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-bbb7b878c-n8l2v_calico-apiserver(53fd5176-64d2-4a70-9167-8081a837fe6e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 22 00:36:04.553799 kubelet[2805]: E0122 00:36:04.552880 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bbb7b878c-n8l2v" podUID="53fd5176-64d2-4a70-9167-8081a837fe6e" Jan 22 00:36:05.429374 containerd[1629]: time="2026-01-22T00:36:05.429330111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 22 00:36:05.575961 containerd[1629]: time="2026-01-22T00:36:05.575913397Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:36:05.577048 containerd[1629]: time="2026-01-22T00:36:05.577022819Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 22 00:36:05.577087 containerd[1629]: time="2026-01-22T00:36:05.577080339Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 22 00:36:05.577204 kubelet[2805]: E0122 00:36:05.577176 2805 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:36:05.577555 kubelet[2805]: E0122 00:36:05.577212 2805 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:36:05.577555 kubelet[2805]: E0122 00:36:05.577273 2805 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-bbb7b878c-c279j_calico-apiserver(ac5cf267-1601-4ae5-91e1-dc1496ea695f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 22 00:36:05.577555 kubelet[2805]: E0122 00:36:05.577300 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bbb7b878c-c279j" podUID="ac5cf267-1601-4ae5-91e1-dc1496ea695f" Jan 22 00:36:10.427233 containerd[1629]: time="2026-01-22T00:36:10.427069057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 22 00:36:10.848225 containerd[1629]: time="2026-01-22T00:36:10.848159854Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:36:10.850046 containerd[1629]: time="2026-01-22T00:36:10.849809597Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 22 00:36:10.850046 containerd[1629]: time="2026-01-22T00:36:10.849907947Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 22 00:36:10.850623 kubelet[2805]: E0122 00:36:10.850482 2805 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 22 00:36:10.851482 kubelet[2805]: E0122 00:36:10.851194 2805 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 22 00:36:10.851482 kubelet[2805]: E0122 00:36:10.851457 2805 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-jv75z_calico-system(7058f0b0-e750-4a4f-832e-cf58713e25a5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 22 00:36:10.852168 
containerd[1629]: time="2026-01-22T00:36:10.851828552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 22 00:36:10.852356 kubelet[2805]: E0122 00:36:10.852284 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-jv75z" podUID="7058f0b0-e750-4a4f-832e-cf58713e25a5" Jan 22 00:36:10.982978 containerd[1629]: time="2026-01-22T00:36:10.982906255Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:36:10.983928 containerd[1629]: time="2026-01-22T00:36:10.983878663Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 22 00:36:10.984003 containerd[1629]: time="2026-01-22T00:36:10.983983904Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 22 00:36:10.984261 kubelet[2805]: E0122 00:36:10.984189 2805 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 22 00:36:10.984326 kubelet[2805]: E0122 00:36:10.984262 2805 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 22 00:36:10.984550 kubelet[2805]: E0122 00:36:10.984492 2805 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-74c55dd4c7-ljvfs_calico-system(c579fa7a-8ee2-4338-b81e-6fc1959a328f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 22 00:36:10.985029 containerd[1629]: time="2026-01-22T00:36:10.984817059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 22 00:36:10.985090 kubelet[2805]: E0122 00:36:10.984978 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74c55dd4c7-ljvfs" podUID="c579fa7a-8ee2-4338-b81e-6fc1959a328f" Jan 22 00:36:11.140833 containerd[1629]: time="2026-01-22T00:36:11.140655304Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:36:11.143303 containerd[1629]: time="2026-01-22T00:36:11.143129715Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 22 00:36:11.143303 containerd[1629]: time="2026-01-22T00:36:11.143165015Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 22 00:36:11.143971 kubelet[2805]: E0122 00:36:11.143913 2805 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 22 00:36:11.144022 kubelet[2805]: E0122 00:36:11.143979 2805 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 22 00:36:11.144109 kubelet[2805]: E0122 00:36:11.144067 2805 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-wm244_calico-system(ae45d206-785e-4662-9efc-4b0987941483): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 22 00:36:11.146152 containerd[1629]: time="2026-01-22T00:36:11.146107560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 22 00:36:11.280903 containerd[1629]: time="2026-01-22T00:36:11.280800404Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:36:11.281980 containerd[1629]: time="2026-01-22T00:36:11.281911264Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 22 00:36:11.282076 containerd[1629]: time="2026-01-22T00:36:11.282034705Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 22 00:36:11.282286 kubelet[2805]: E0122 00:36:11.282251 2805 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 22 00:36:11.283820 kubelet[2805]: E0122 00:36:11.282597 2805 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 22 00:36:11.283820 kubelet[2805]: E0122 00:36:11.282700 2805 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-wm244_calico-system(ae45d206-785e-4662-9efc-4b0987941483): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 22 00:36:11.283953 kubelet[2805]: E0122 00:36:11.283920 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wm244" podUID="ae45d206-785e-4662-9efc-4b0987941483" Jan 22 00:36:17.428756 kubelet[2805]: E0122 00:36:17.428704 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-687c765d98-jfjzf" podUID="8ecd20e5-2e31-4297-a10a-ea50808543e7" Jan 22 00:36:18.422036 kubelet[2805]: E0122 00:36:18.421904 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:36:18.423397 kubelet[2805]: E0122 00:36:18.423348 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bbb7b878c-c279j" podUID="ac5cf267-1601-4ae5-91e1-dc1496ea695f" Jan 22 00:36:19.446248 kubelet[2805]: E0122 00:36:19.446193 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bbb7b878c-n8l2v" podUID="53fd5176-64d2-4a70-9167-8081a837fe6e" Jan 22 00:36:20.422539 kubelet[2805]: E0122 00:36:20.422242 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 
172.232.0.13" Jan 22 00:36:21.424060 kubelet[2805]: E0122 00:36:21.423171 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:36:22.422891 kubelet[2805]: E0122 00:36:22.422826 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-jv75z" podUID="7058f0b0-e750-4a4f-832e-cf58713e25a5" Jan 22 00:36:23.427561 kubelet[2805]: E0122 00:36:23.427067 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74c55dd4c7-ljvfs" podUID="c579fa7a-8ee2-4338-b81e-6fc1959a328f" Jan 22 00:36:24.427788 kubelet[2805]: E0122 00:36:24.427422 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wm244" podUID="ae45d206-785e-4662-9efc-4b0987941483" Jan 22 00:36:30.424699 kubelet[2805]: E0122 00:36:30.424652 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bbb7b878c-c279j" podUID="ac5cf267-1601-4ae5-91e1-dc1496ea695f" Jan 22 00:36:31.429536 kubelet[2805]: E0122 00:36:31.428867 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling 
image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-687c765d98-jfjzf" podUID="8ecd20e5-2e31-4297-a10a-ea50808543e7" Jan 22 00:36:33.423054 kubelet[2805]: E0122 00:36:33.422492 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-jv75z" podUID="7058f0b0-e750-4a4f-832e-cf58713e25a5" Jan 22 00:36:33.423054 kubelet[2805]: E0122 00:36:33.423015 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bbb7b878c-n8l2v" podUID="53fd5176-64d2-4a70-9167-8081a837fe6e" Jan 22 00:36:38.424409 kubelet[2805]: E0122 00:36:38.423922 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74c55dd4c7-ljvfs" podUID="c579fa7a-8ee2-4338-b81e-6fc1959a328f" Jan 22 00:36:39.428358 kubelet[2805]: E0122 00:36:39.428281 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:36:39.430496 kubelet[2805]: E0122 00:36:39.430308 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wm244" podUID="ae45d206-785e-4662-9efc-4b0987941483" Jan 22 00:36:42.423142 kubelet[2805]: E0122 00:36:42.423008 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-687c765d98-jfjzf" podUID="8ecd20e5-2e31-4297-a10a-ea50808543e7" Jan 22 00:36:44.424998 kubelet[2805]: E0122 00:36:44.424914 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bbb7b878c-n8l2v" podUID="53fd5176-64d2-4a70-9167-8081a837fe6e" Jan 22 00:36:45.423676 kubelet[2805]: E0122 00:36:45.423614 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bbb7b878c-c279j" podUID="ac5cf267-1601-4ae5-91e1-dc1496ea695f" Jan 22 00:36:48.422931 kubelet[2805]: E0122 00:36:48.422864 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:36:48.426734 kubelet[2805]: E0122 00:36:48.426577 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-jv75z" podUID="7058f0b0-e750-4a4f-832e-cf58713e25a5" Jan 22 00:36:49.182721 kernel: hrtimer: interrupt took 4263970 ns Jan 22 00:36:50.424293 kubelet[2805]: E0122 00:36:50.424233 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74c55dd4c7-ljvfs" podUID="c579fa7a-8ee2-4338-b81e-6fc1959a328f" Jan 22 00:36:50.427194 kubelet[2805]: E0122 00:36:50.424658 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wm244" podUID="ae45d206-785e-4662-9efc-4b0987941483" Jan 22 00:36:52.421545 kubelet[2805]: E0122 00:36:52.421391 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:36:54.422533 kubelet[2805]: E0122 00:36:54.422430 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:36:55.445365 containerd[1629]: time="2026-01-22T00:36:55.443379120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 22 00:36:55.583019 containerd[1629]: time="2026-01-22T00:36:55.582880195Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:36:55.584236 containerd[1629]: time="2026-01-22T00:36:55.584170372Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 22 00:36:55.584548 containerd[1629]: time="2026-01-22T00:36:55.584336367Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 22 00:36:55.584716 kubelet[2805]: E0122 00:36:55.584678 2805 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:36:55.585436 kubelet[2805]: E0122 00:36:55.584725 2805 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:36:55.585436 kubelet[2805]: E0122 00:36:55.584800 2805 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-bbb7b878c-n8l2v_calico-apiserver(53fd5176-64d2-4a70-9167-8081a837fe6e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 22 00:36:55.585436 kubelet[2805]: E0122 00:36:55.584828 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bbb7b878c-n8l2v" podUID="53fd5176-64d2-4a70-9167-8081a837fe6e" Jan 22 00:36:56.424569 containerd[1629]: time="2026-01-22T00:36:56.424527062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 22 00:36:56.562290 containerd[1629]: time="2026-01-22T00:36:56.562223093Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:36:56.563564 containerd[1629]: time="2026-01-22T00:36:56.563529942Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 22 00:36:56.563618 containerd[1629]: time="2026-01-22T00:36:56.563576374Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 22 00:36:56.563929 kubelet[2805]: E0122 00:36:56.563870 2805 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 22 00:36:56.563929 kubelet[2805]: E0122 00:36:56.563914 2805 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 22 00:36:56.564537 kubelet[2805]: E0122 00:36:56.563999 2805 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-687c765d98-jfjzf_calico-system(8ecd20e5-2e31-4297-a10a-ea50808543e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 22 00:36:56.566662 containerd[1629]: time="2026-01-22T00:36:56.565878383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 22 00:36:56.703556 containerd[1629]: time="2026-01-22T00:36:56.703485811Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:36:56.704416 containerd[1629]: time="2026-01-22T00:36:56.704383708Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 22 00:36:56.704576 containerd[1629]: time="2026-01-22T00:36:56.704452770Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 22 00:36:56.704702 kubelet[2805]: E0122 00:36:56.704660 2805 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 22 00:36:56.704702 kubelet[2805]: E0122 00:36:56.704708 2805 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 22 00:36:56.705347 kubelet[2805]: E0122 00:36:56.704780 2805 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-687c765d98-jfjzf_calico-system(8ecd20e5-2e31-4297-a10a-ea50808543e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 22 00:36:56.705347 kubelet[2805]: E0122 00:36:56.704818 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-687c765d98-jfjzf" podUID="8ecd20e5-2e31-4297-a10a-ea50808543e7" Jan 22 00:36:58.423309 containerd[1629]: time="2026-01-22T00:36:58.423248224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 22 00:36:58.556486 containerd[1629]: time="2026-01-22T00:36:58.556435476Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:36:58.557494 containerd[1629]: time="2026-01-22T00:36:58.557459928Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 22 00:36:58.557571 containerd[1629]: time="2026-01-22T00:36:58.557552071Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 22 00:36:58.557796 kubelet[2805]: E0122 00:36:58.557741 2805 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:36:58.558205 kubelet[2805]: E0122 00:36:58.557814 2805 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:36:58.558205 kubelet[2805]: E0122 00:36:58.557913 2805 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-bbb7b878c-c279j_calico-apiserver(ac5cf267-1601-4ae5-91e1-dc1496ea695f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 22 00:36:58.558205 kubelet[2805]: E0122 00:36:58.557943 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bbb7b878c-c279j" podUID="ac5cf267-1601-4ae5-91e1-dc1496ea695f" Jan 22 00:36:59.424005 containerd[1629]: time="2026-01-22T00:36:59.423955980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 22 00:36:59.557685 containerd[1629]: time="2026-01-22T00:36:59.557607082Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:36:59.558737 containerd[1629]: time="2026-01-22T00:36:59.558702676Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 22 00:36:59.558799 containerd[1629]: time="2026-01-22T00:36:59.558775259Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 22 00:36:59.559049 kubelet[2805]: E0122 00:36:59.558990 2805 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 22 00:36:59.559870 kubelet[2805]: E0122 00:36:59.559036 2805 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 22 00:36:59.559870 kubelet[2805]: E0122 00:36:59.559180 2805 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-jv75z_calico-system(7058f0b0-e750-4a4f-832e-cf58713e25a5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 22 00:36:59.559870 kubelet[2805]: E0122 00:36:59.559213 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-jv75z" podUID="7058f0b0-e750-4a4f-832e-cf58713e25a5" Jan 22 00:37:01.424563 containerd[1629]: time="2026-01-22T00:37:01.424472911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 22 00:37:01.773757 containerd[1629]: time="2026-01-22T00:37:01.773688713Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:37:01.774823 containerd[1629]: time="2026-01-22T00:37:01.774747125Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 22 00:37:01.774971 containerd[1629]: time="2026-01-22T00:37:01.774852138Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 22 00:37:01.775095 kubelet[2805]: E0122 00:37:01.775022 2805 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 22 00:37:01.775095 kubelet[2805]: E0122 00:37:01.775073 2805 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 22 00:37:01.775464 kubelet[2805]: E0122 00:37:01.775235 2805 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-74c55dd4c7-ljvfs_calico-system(c579fa7a-8ee2-4338-b81e-6fc1959a328f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 22 00:37:01.775503 kubelet[2805]: E0122 00:37:01.775463 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74c55dd4c7-ljvfs" podUID="c579fa7a-8ee2-4338-b81e-6fc1959a328f" Jan 22 00:37:02.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.232.4.171:22-20.161.92.111:33618 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:37:02.681269 systemd[1]: Started sshd@7-172.232.4.171:22-20.161.92.111:33618.service - OpenSSH per-connection server daemon (20.161.92.111:33618). Jan 22 00:37:02.692189 kernel: kauditd_printk_skb: 194 callbacks suppressed Jan 22 00:37:02.692331 kernel: audit: type=1130 audit(1769042222.680:747): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.232.4.171:22-20.161.92.111:33618 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:37:02.851000 audit[5197]: USER_ACCT pid=5197 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:02.852660 sshd[5197]: Accepted publickey for core from 20.161.92.111 port 33618 ssh2: RSA SHA256:Scpzv+CdshzsDE47WY8VrFlF7F5SiGsSfXbWwUnoTgA Jan 22 00:37:02.855083 sshd-session[5197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:37:02.860720 kernel: audit: type=1101 audit(1769042222.851:748): pid=5197 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:02.853000 audit[5197]: CRED_ACQ pid=5197 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:02.870350 kernel: audit: type=1103 audit(1769042222.853:749): pid=5197 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:02.870423 kernel: audit: type=1006 audit(1769042222.853:750): pid=5197 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Jan 22 00:37:02.853000 audit[5197]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffee7d34720 a2=3 a3=0 items=0 ppid=1 pid=5197 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:37:02.875673 kernel: audit: type=1300 audit(1769042222.853:750): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffee7d34720 a2=3 a3=0 items=0 ppid=1 pid=5197 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:37:02.882612 kernel: audit: type=1327 audit(1769042222.853:750): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:37:02.853000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:37:02.889161 systemd-logind[1596]: New session 8 of user core. Jan 22 00:37:02.892693 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 22 00:37:02.898000 audit[5197]: USER_START pid=5197 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:02.907571 kernel: audit: type=1105 audit(1769042222.898:751): pid=5197 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:02.908000 audit[5200]: CRED_ACQ pid=5200 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:02.917566 kernel: audit: type=1103 audit(1769042222.908:752): pid=5200 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:03.068367 sshd[5200]: Connection closed by 20.161.92.111 port 33618 Jan 22 00:37:03.069255 sshd-session[5197]: pam_unix(sshd:session): session closed for user core Jan 22 00:37:03.071000 audit[5197]: USER_END pid=5197 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:03.075789 systemd[1]: sshd@7-172.232.4.171:22-20.161.92.111:33618.service: Deactivated successfully. Jan 22 00:37:03.079542 systemd[1]: session-8.scope: Deactivated successfully. Jan 22 00:37:03.071000 audit[5197]: CRED_DISP pid=5197 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:03.082676 kernel: audit: type=1106 audit(1769042223.071:753): pid=5197 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:03.082732 kernel: audit: type=1104 audit(1769042223.071:754): pid=5197 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:03.082585 systemd-logind[1596]: Session 8 logged out. Waiting for processes to exit. Jan 22 00:37:03.084021 systemd-logind[1596]: Removed session 8. Jan 22 00:37:03.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.232.4.171:22-20.161.92.111:33618 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:37:03.423755 containerd[1629]: time="2026-01-22T00:37:03.422767314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 22 00:37:03.564326 containerd[1629]: time="2026-01-22T00:37:03.564274371Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:37:03.565644 containerd[1629]: time="2026-01-22T00:37:03.565586332Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 22 00:37:03.567877 containerd[1629]: time="2026-01-22T00:37:03.565687655Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 22 00:37:03.567952 kubelet[2805]: E0122 00:37:03.567651 2805 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 22 00:37:03.567952 kubelet[2805]: E0122 00:37:03.567689 2805 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 22 00:37:03.567952 kubelet[2805]: E0122 00:37:03.567757 2805 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-wm244_calico-system(ae45d206-785e-4662-9efc-4b0987941483): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 22 00:37:03.569743 containerd[1629]: time="2026-01-22T00:37:03.569373821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 22 00:37:03.707960 containerd[1629]: time="2026-01-22T00:37:03.707791871Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:37:03.708945 containerd[1629]: time="2026-01-22T00:37:03.708844715Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 22 00:37:03.708945 containerd[1629]: time="2026-01-22T00:37:03.708921587Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 22 00:37:03.709227 kubelet[2805]: E0122 00:37:03.709181 2805 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 22 00:37:03.709319 kubelet[2805]: E0122 00:37:03.709299 2805 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 22 00:37:03.709471 kubelet[2805]: E0122 00:37:03.709453 2805 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-wm244_calico-system(ae45d206-785e-4662-9efc-4b0987941483): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 22 00:37:03.709832 kubelet[2805]: E0122 00:37:03.709791 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wm244" podUID="ae45d206-785e-4662-9efc-4b0987941483" Jan 22 00:37:07.425351 kubelet[2805]: E0122 00:37:07.425301 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bbb7b878c-n8l2v" podUID="53fd5176-64d2-4a70-9167-8081a837fe6e" Jan 22 00:37:08.109825 systemd[1]: Started sshd@8-172.232.4.171:22-20.161.92.111:33626.service - OpenSSH per-connection server daemon (20.161.92.111:33626). Jan 22 00:37:08.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.232.4.171:22-20.161.92.111:33626 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:37:08.118555 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 22 00:37:08.118618 kernel: audit: type=1130 audit(1769042228.109:756): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.232.4.171:22-20.161.92.111:33626 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:37:08.286000 audit[5214]: USER_ACCT pid=5214 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:08.290198 sshd-session[5214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:37:08.290973 sshd[5214]: Accepted publickey for core from 20.161.92.111 port 33626 ssh2: RSA SHA256:Scpzv+CdshzsDE47WY8VrFlF7F5SiGsSfXbWwUnoTgA Jan 22 00:37:08.295733 kernel: audit: type=1101 audit(1769042228.286:757): pid=5214 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:08.289000 audit[5214]: CRED_ACQ pid=5214 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:08.303618 kernel: audit: type=1103 audit(1769042228.289:758): pid=5214 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:08.303968 systemd-logind[1596]: New session 9 of user core. Jan 22 00:37:08.308542 kernel: audit: type=1006 audit(1769042228.289:759): pid=5214 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 22 00:37:08.289000 audit[5214]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd2f28cab0 a2=3 a3=0 items=0 ppid=1 pid=5214 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:37:08.289000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:37:08.339618 kernel: audit: type=1300 audit(1769042228.289:759): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd2f28cab0 a2=3 a3=0 items=0 ppid=1 pid=5214 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:37:08.339718 kernel: audit: type=1327 audit(1769042228.289:759): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:37:08.340937 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 22 00:37:08.349000 audit[5214]: USER_START pid=5214 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:08.359560 kernel: audit: type=1105 audit(1769042228.349:760): pid=5214 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:08.359000 audit[5217]: CRED_ACQ pid=5217 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:08.367544 kernel: audit: type=1103 audit(1769042228.359:761): pid=5217 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:08.514116 sshd[5217]: Connection closed by 20.161.92.111 port 33626 Jan 22 00:37:08.516605 sshd-session[5214]: pam_unix(sshd:session): session closed for user core Jan 22 00:37:08.517000 audit[5214]: USER_END pid=5214 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:08.526494 systemd[1]: sshd@8-172.232.4.171:22-20.161.92.111:33626.service: Deactivated successfully. Jan 22 00:37:08.527798 kernel: audit: type=1106 audit(1769042228.517:762): pid=5214 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:08.531211 systemd[1]: session-9.scope: Deactivated successfully. Jan 22 00:37:08.533399 systemd-logind[1596]: Session 9 logged out. Waiting for processes to exit. Jan 22 00:37:08.535079 systemd-logind[1596]: Removed session 9. Jan 22 00:37:08.518000 audit[5214]: CRED_DISP pid=5214 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:08.542557 kernel: audit: type=1104 audit(1769042228.518:763): pid=5214 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:08.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.232.4.171:22-20.161.92.111:33626 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:37:09.424478 kubelet[2805]: E0122 00:37:09.424403 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-687c765d98-jfjzf" podUID="8ecd20e5-2e31-4297-a10a-ea50808543e7" Jan 22 00:37:09.425712 kubelet[2805]: E0122 00:37:09.424756 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bbb7b878c-c279j" podUID="ac5cf267-1601-4ae5-91e1-dc1496ea695f" Jan 22 00:37:10.425688 kubelet[2805]: E0122 00:37:10.425644 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-jv75z" podUID="7058f0b0-e750-4a4f-832e-cf58713e25a5" Jan 22 00:37:13.550847 systemd[1]: Started sshd@9-172.232.4.171:22-20.161.92.111:60076.service - OpenSSH per-connection server daemon (20.161.92.111:60076). Jan 22 00:37:13.558268 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 22 00:37:13.558331 kernel: audit: type=1130 audit(1769042233.549:765): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.232.4.171:22-20.161.92.111:60076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:37:13.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.232.4.171:22-20.161.92.111:60076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:37:13.728117 sshd[5251]: Accepted publickey for core from 20.161.92.111 port 60076 ssh2: RSA SHA256:Scpzv+CdshzsDE47WY8VrFlF7F5SiGsSfXbWwUnoTgA Jan 22 00:37:13.726000 audit[5251]: USER_ACCT pid=5251 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:13.736588 kernel: audit: type=1101 audit(1769042233.726:766): pid=5251 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:13.737784 sshd-session[5251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:37:13.735000 audit[5251]: CRED_ACQ pid=5251 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:13.749616 kernel: audit: type=1103 audit(1769042233.735:767): pid=5251 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:13.756552 kernel: audit: type=1006 audit(1769042233.735:768): pid=5251 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 22 00:37:13.755999 systemd-logind[1596]: New session 10 of user core. Jan 22 00:37:13.735000 audit[5251]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff241fd2c0 a2=3 a3=0 items=0 ppid=1 pid=5251 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:37:13.765577 kernel: audit: type=1300 audit(1769042233.735:768): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff241fd2c0 a2=3 a3=0 items=0 ppid=1 pid=5251 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:37:13.765761 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 22 00:37:13.735000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:37:13.770572 kernel: audit: type=1327 audit(1769042233.735:768): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:37:13.770000 audit[5251]: USER_START pid=5251 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:13.780542 kernel: audit: type=1105 audit(1769042233.770:769): pid=5251 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:13.781000 audit[5254]: CRED_ACQ pid=5254 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:13.790536 kernel: audit: type=1103 audit(1769042233.781:770): pid=5254 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:13.946489 sshd[5254]: Connection closed by 20.161.92.111 port 60076 Jan 22 00:37:13.948721 sshd-session[5251]: pam_unix(sshd:session): session closed for user core Jan 22 00:37:13.949000 audit[5251]: USER_END pid=5251 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:13.959541 kernel: audit: type=1106 audit(1769042233.949:771): pid=5251 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:13.958000 audit[5251]: CRED_DISP pid=5251 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:13.962333 systemd[1]: sshd@9-172.232.4.171:22-20.161.92.111:60076.service: Deactivated successfully. Jan 22 00:37:13.967181 systemd[1]: session-10.scope: Deactivated successfully. Jan 22 00:37:13.968549 kernel: audit: type=1104 audit(1769042233.958:772): pid=5251 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:13.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.232.4.171:22-20.161.92.111:60076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:37:13.972617 systemd-logind[1596]: Session 10 logged out. Waiting for processes to exit. Jan 22 00:37:13.986740 systemd[1]: Started sshd@10-172.232.4.171:22-20.161.92.111:60084.service - OpenSSH per-connection server daemon (20.161.92.111:60084). Jan 22 00:37:13.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.232.4.171:22-20.161.92.111:60084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:37:13.991120 systemd-logind[1596]: Removed session 10. Jan 22 00:37:14.180000 audit[5266]: USER_ACCT pid=5266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:14.183572 sshd[5266]: Accepted publickey for core from 20.161.92.111 port 60084 ssh2: RSA SHA256:Scpzv+CdshzsDE47WY8VrFlF7F5SiGsSfXbWwUnoTgA Jan 22 00:37:14.182000 audit[5266]: CRED_ACQ pid=5266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:14.182000 audit[5266]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd27883230 a2=3 a3=0 items=0 ppid=1 pid=5266 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:37:14.182000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:37:14.184979 sshd-session[5266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:37:14.193355 systemd-logind[1596]: New session 11 of user core. Jan 22 00:37:14.199770 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 22 00:37:14.204000 audit[5266]: USER_START pid=5266 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:14.207000 audit[5273]: CRED_ACQ pid=5273 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:14.413419 sshd[5273]: Connection closed by 20.161.92.111 port 60084 Jan 22 00:37:14.416577 sshd-session[5266]: pam_unix(sshd:session): session closed for user core Jan 22 00:37:14.417000 audit[5266]: USER_END pid=5266 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:14.417000 audit[5266]: CRED_DISP pid=5266 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:14.422916 systemd[1]: sshd@10-172.232.4.171:22-20.161.92.111:60084.service: Deactivated successfully. Jan 22 00:37:14.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.232.4.171:22-20.161.92.111:60084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:37:14.427633 systemd[1]: session-11.scope: Deactivated successfully. Jan 22 00:37:14.430843 systemd-logind[1596]: Session 11 logged out. Waiting for processes to exit. Jan 22 00:37:14.436719 systemd-logind[1596]: Removed session 11. Jan 22 00:37:14.450045 systemd[1]: Started sshd@11-172.232.4.171:22-20.161.92.111:60090.service - OpenSSH per-connection server daemon (20.161.92.111:60090). Jan 22 00:37:14.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.232.4.171:22-20.161.92.111:60090 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:37:14.614000 audit[5284]: USER_ACCT pid=5284 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:14.617457 sshd[5284]: Accepted publickey for core from 20.161.92.111 port 60090 ssh2: RSA SHA256:Scpzv+CdshzsDE47WY8VrFlF7F5SiGsSfXbWwUnoTgA Jan 22 00:37:14.617000 audit[5284]: CRED_ACQ pid=5284 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:14.617000 audit[5284]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc4a254790 a2=3 a3=0 items=0 ppid=1 pid=5284 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:37:14.617000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:37:14.619399 sshd-session[5284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:37:14.628681 systemd-logind[1596]: New session 12 of user core. Jan 22 00:37:14.632771 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 22 00:37:14.637000 audit[5284]: USER_START pid=5284 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:14.640000 audit[5287]: CRED_ACQ pid=5287 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:14.751222 sshd[5287]: Connection closed by 20.161.92.111 port 60090 Jan 22 00:37:14.751996 sshd-session[5284]: pam_unix(sshd:session): session closed for user core Jan 22 00:37:14.751000 audit[5284]: USER_END pid=5284 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:14.752000 audit[5284]: CRED_DISP pid=5284 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:14.756246 systemd[1]: sshd@11-172.232.4.171:22-20.161.92.111:60090.service: Deactivated successfully. Jan 22 00:37:14.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.232.4.171:22-20.161.92.111:60090 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:37:14.759107 systemd[1]: session-12.scope: Deactivated successfully. Jan 22 00:37:14.760627 systemd-logind[1596]: Session 12 logged out. Waiting for processes to exit. 
Jan 22 00:37:14.762825 systemd-logind[1596]: Removed session 12. Jan 22 00:37:15.423168 kubelet[2805]: E0122 00:37:15.423017 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74c55dd4c7-ljvfs" podUID="c579fa7a-8ee2-4338-b81e-6fc1959a328f" Jan 22 00:37:16.426563 kubelet[2805]: E0122 00:37:16.425981 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wm244" podUID="ae45d206-785e-4662-9efc-4b0987941483" Jan 22 00:37:19.793046 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 22 00:37:19.793136 kernel: audit: type=1130 audit(1769042239.789:792): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.232.4.171:22-20.161.92.111:60102 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:37:19.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.232.4.171:22-20.161.92.111:60102 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:37:19.790764 systemd[1]: Started sshd@12-172.232.4.171:22-20.161.92.111:60102.service - OpenSSH per-connection server daemon (20.161.92.111:60102). 
Jan 22 00:37:19.955000 audit[5321]: USER_ACCT pid=5321 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:19.958192 sshd-session[5321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:37:19.959943 sshd[5321]: Accepted publickey for core from 20.161.92.111 port 60102 ssh2: RSA SHA256:Scpzv+CdshzsDE47WY8VrFlF7F5SiGsSfXbWwUnoTgA Jan 22 00:37:19.965006 kernel: audit: type=1101 audit(1769042239.955:793): pid=5321 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:19.956000 audit[5321]: CRED_ACQ pid=5321 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:19.968232 systemd-logind[1596]: New session 13 of user core. Jan 22 00:37:19.974577 kernel: audit: type=1103 audit(1769042239.956:794): pid=5321 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:19.956000 audit[5321]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffa88f7390 a2=3 a3=0 items=0 ppid=1 pid=5321 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:37:19.981449 kernel: audit: type=1006 audit(1769042239.956:795): pid=5321 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 22 00:37:19.981494 kernel: audit: type=1300 audit(1769042239.956:795): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffa88f7390 a2=3 a3=0 items=0 ppid=1 pid=5321 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:37:19.981798 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 22 00:37:19.956000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:37:19.987000 audit[5321]: USER_START pid=5321 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:19.992738 kernel: audit: type=1327 audit(1769042239.956:795): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:37:19.992785 kernel: audit: type=1105 audit(1769042239.987:796): pid=5321 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:19.999534 kernel: audit: type=1103 audit(1769042239.991:797): pid=5324 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:19.991000 audit[5324]: CRED_ACQ pid=5324 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:20.109607 sshd[5324]: Connection closed by 20.161.92.111 port 60102 Jan 22 00:37:20.110702 sshd-session[5321]: pam_unix(sshd:session): session closed for user core Jan 22 00:37:20.112000 audit[5321]: USER_END pid=5321 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:20.116840 systemd[1]: sshd@12-172.232.4.171:22-20.161.92.111:60102.service: Deactivated successfully. Jan 22 00:37:20.117191 systemd-logind[1596]: Session 13 logged out. Waiting for processes to exit. Jan 22 00:37:20.120228 systemd[1]: session-13.scope: Deactivated successfully. Jan 22 00:37:20.122828 kernel: audit: type=1106 audit(1769042240.112:798): pid=5321 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:20.125009 systemd-logind[1596]: Removed session 13. Jan 22 00:37:20.112000 audit[5321]: CRED_DISP pid=5321 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:20.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.232.4.171:22-20.161.92.111:60102 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:37:20.134578 kernel: audit: type=1104 audit(1769042240.112:799): pid=5321 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:20.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.232.4.171:22-20.161.92.111:60106 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:37:20.152021 systemd[1]: Started sshd@13-172.232.4.171:22-20.161.92.111:60106.service - OpenSSH per-connection server daemon (20.161.92.111:60106). Jan 22 00:37:20.334000 audit[5336]: USER_ACCT pid=5336 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:20.337403 sshd[5336]: Accepted publickey for core from 20.161.92.111 port 60106 ssh2: RSA SHA256:Scpzv+CdshzsDE47WY8VrFlF7F5SiGsSfXbWwUnoTgA Jan 22 00:37:20.336000 audit[5336]: CRED_ACQ pid=5336 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:20.336000 audit[5336]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc6788fa60 a2=3 a3=0 items=0 ppid=1 pid=5336 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:37:20.336000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:37:20.338615 sshd-session[5336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:37:20.345599 systemd-logind[1596]: New session 14 of user core. Jan 22 00:37:20.351672 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 22 00:37:20.355000 audit[5336]: USER_START pid=5336 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:20.359000 audit[5339]: CRED_ACQ pid=5339 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:20.428143 kubelet[2805]: E0122 00:37:20.426345 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bbb7b878c-n8l2v" podUID="53fd5176-64d2-4a70-9167-8081a837fe6e" Jan 22 00:37:20.655541 sshd[5339]: Connection closed by 20.161.92.111 port 60106 Jan 22 00:37:20.652925 sshd-session[5336]: pam_unix(sshd:session): session closed for user core Jan 22 00:37:20.653000 audit[5336]: USER_END pid=5336 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:20.653000 audit[5336]: CRED_DISP pid=5336 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:20.658925 systemd[1]: sshd@13-172.232.4.171:22-20.161.92.111:60106.service: Deactivated successfully. Jan 22 00:37:20.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.232.4.171:22-20.161.92.111:60106 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:37:20.663946 systemd[1]: session-14.scope: Deactivated successfully. Jan 22 00:37:20.666424 systemd-logind[1596]: Session 14 logged out. Waiting for processes to exit. Jan 22 00:37:20.672849 systemd-logind[1596]: Removed session 14. Jan 22 00:37:20.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.232.4.171:22-20.161.92.111:60108 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:37:20.684902 systemd[1]: Started sshd@14-172.232.4.171:22-20.161.92.111:60108.service - OpenSSH per-connection server daemon (20.161.92.111:60108). 
Jan 22 00:37:20.867000 audit[5349]: USER_ACCT pid=5349 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:20.869671 sshd[5349]: Accepted publickey for core from 20.161.92.111 port 60108 ssh2: RSA SHA256:Scpzv+CdshzsDE47WY8VrFlF7F5SiGsSfXbWwUnoTgA Jan 22 00:37:20.868000 audit[5349]: CRED_ACQ pid=5349 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:20.869000 audit[5349]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffcd1bfb90 a2=3 a3=0 items=0 ppid=1 pid=5349 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:37:20.869000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:37:20.870983 sshd-session[5349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:37:20.880290 systemd-logind[1596]: New session 15 of user core. Jan 22 00:37:20.883780 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 22 00:37:20.887000 audit[5349]: USER_START pid=5349 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:20.889000 audit[5352]: CRED_ACQ pid=5352 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:21.427541 kubelet[2805]: E0122 00:37:21.427423 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:37:21.437221 kubelet[2805]: E0122 00:37:21.437149 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-jv75z" podUID="7058f0b0-e750-4a4f-832e-cf58713e25a5" Jan 22 00:37:21.468034 sshd[5352]: Connection closed by 20.161.92.111 port 60108 Jan 22 00:37:21.468435 sshd-session[5349]: pam_unix(sshd:session): session closed for user core Jan 22 00:37:21.470000 audit[5349]: USER_END pid=5349 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:21.471000 audit[5349]: CRED_DISP pid=5349 uid=0 auid=500 ses=15 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:21.476626 systemd[1]: sshd@14-172.232.4.171:22-20.161.92.111:60108.service: Deactivated successfully. Jan 22 00:37:21.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.232.4.171:22-20.161.92.111:60108 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:37:21.479481 systemd[1]: session-15.scope: Deactivated successfully. Jan 22 00:37:21.481489 systemd-logind[1596]: Session 15 logged out. Waiting for processes to exit. Jan 22 00:37:21.483333 systemd-logind[1596]: Removed session 15. Jan 22 00:37:21.492000 audit[5367]: NETFILTER_CFG table=filter:131 family=2 entries=26 op=nft_register_rule pid=5367 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:37:21.492000 audit[5367]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffffaccefd0 a2=0 a3=7ffffaccefbc items=0 ppid=2914 pid=5367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:37:21.492000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:37:21.495000 audit[5367]: NETFILTER_CFG table=nat:132 family=2 entries=20 op=nft_register_rule pid=5367 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:37:21.495000 audit[5367]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffffaccefd0 a2=0 a3=0 items=0 ppid=2914 pid=5367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:37:21.495000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:37:21.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.232.4.171:22-20.161.92.111:60122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:37:21.500143 systemd[1]: Started sshd@15-172.232.4.171:22-20.161.92.111:60122.service - OpenSSH per-connection server daemon (20.161.92.111:60122). 
Jan 22 00:37:21.655000 audit[5368]: USER_ACCT pid=5368 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:21.658566 sshd[5368]: Accepted publickey for core from 20.161.92.111 port 60122 ssh2: RSA SHA256:Scpzv+CdshzsDE47WY8VrFlF7F5SiGsSfXbWwUnoTgA Jan 22 00:37:21.657000 audit[5368]: CRED_ACQ pid=5368 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:21.657000 audit[5368]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb322d1c0 a2=3 a3=0 items=0 ppid=1 pid=5368 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:37:21.657000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:37:21.659681 sshd-session[5368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:37:21.670569 systemd-logind[1596]: New session 16 of user core. Jan 22 00:37:21.672684 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 22 00:37:21.679000 audit[5368]: USER_START pid=5368 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:21.683000 audit[5372]: CRED_ACQ pid=5372 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:21.912772 sshd[5372]: Connection closed by 20.161.92.111 port 60122 Jan 22 00:37:21.914151 sshd-session[5368]: pam_unix(sshd:session): session closed for user core Jan 22 00:37:21.915000 audit[5368]: USER_END pid=5368 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:21.915000 audit[5368]: CRED_DISP pid=5368 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:21.920208 systemd[1]: sshd@15-172.232.4.171:22-20.161.92.111:60122.service: Deactivated successfully. Jan 22 00:37:21.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.232.4.171:22-20.161.92.111:60122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:37:21.920805 systemd-logind[1596]: Session 16 logged out. Waiting for processes to exit. Jan 22 00:37:21.925264 systemd[1]: session-16.scope: Deactivated successfully. 
Jan 22 00:37:21.930119 systemd-logind[1596]: Removed session 16. Jan 22 00:37:21.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.232.4.171:22-20.161.92.111:60124 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:37:21.946648 systemd[1]: Started sshd@16-172.232.4.171:22-20.161.92.111:60124.service - OpenSSH per-connection server daemon (20.161.92.111:60124). Jan 22 00:37:22.091000 audit[5381]: USER_ACCT pid=5381 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:22.092787 sshd[5381]: Accepted publickey for core from 20.161.92.111 port 60124 ssh2: RSA SHA256:Scpzv+CdshzsDE47WY8VrFlF7F5SiGsSfXbWwUnoTgA Jan 22 00:37:22.093000 audit[5381]: CRED_ACQ pid=5381 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:22.093000 audit[5381]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc7a328970 a2=3 a3=0 items=0 ppid=1 pid=5381 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:37:22.093000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:37:22.095739 sshd-session[5381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:37:22.103937 systemd-logind[1596]: New session 17 of user core. Jan 22 00:37:22.109662 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 22 00:37:22.114000 audit[5381]: USER_START pid=5381 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:22.116000 audit[5384]: CRED_ACQ pid=5384 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:22.273429 sshd[5384]: Connection closed by 20.161.92.111 port 60124 Jan 22 00:37:22.275851 sshd-session[5381]: pam_unix(sshd:session): session closed for user core Jan 22 00:37:22.277000 audit[5381]: USER_END pid=5381 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:22.277000 audit[5381]: CRED_DISP pid=5381 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:22.282265 systemd[1]: sshd@16-172.232.4.171:22-20.161.92.111:60124.service: Deactivated successfully. 
Jan 22 00:37:22.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.232.4.171:22-20.161.92.111:60124 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:37:22.287475 systemd[1]: session-17.scope: Deactivated successfully. Jan 22 00:37:22.290054 systemd-logind[1596]: Session 17 logged out. Waiting for processes to exit. Jan 22 00:37:22.292959 systemd-logind[1596]: Removed session 17. Jan 22 00:37:22.511000 audit[5399]: NETFILTER_CFG table=filter:133 family=2 entries=38 op=nft_register_rule pid=5399 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:37:22.511000 audit[5399]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe2c301530 a2=0 a3=7ffe2c30151c items=0 ppid=2914 pid=5399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:37:22.511000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:37:22.515000 audit[5399]: NETFILTER_CFG table=nat:134 family=2 entries=20 op=nft_register_rule pid=5399 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:37:22.515000 audit[5399]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe2c301530 a2=0 a3=0 items=0 ppid=2914 pid=5399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:37:22.515000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:37:23.425004 kubelet[2805]: E0122 00:37:23.424930 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-687c765d98-jfjzf" podUID="8ecd20e5-2e31-4297-a10a-ea50808543e7" Jan 22 00:37:24.423074 kubelet[2805]: E0122 00:37:24.423015 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bbb7b878c-c279j" podUID="ac5cf267-1601-4ae5-91e1-dc1496ea695f" Jan 22 00:37:25.423750 kubelet[2805]: E0122 00:37:25.423348 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:37:26.423100 kubelet[2805]: E0122 00:37:26.423050 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74c55dd4c7-ljvfs" podUID="c579fa7a-8ee2-4338-b81e-6fc1959a328f" Jan 22 00:37:26.550617 kernel: kauditd_printk_skb: 57 callbacks suppressed Jan 22 00:37:26.550725 kernel: audit: type=1325 audit(1769042246.543:841): table=filter:135 family=2 entries=26 op=nft_register_rule pid=5402 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:37:26.543000 audit[5402]: NETFILTER_CFG table=filter:135 family=2 entries=26 op=nft_register_rule pid=5402 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:37:26.557871 kernel: audit: type=1300 audit(1769042246.543:841): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffeeeaeec00 a2=0 a3=7ffeeeaeebec items=0 ppid=2914 pid=5402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:37:26.543000 audit[5402]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffeeeaeec00 a2=0 a3=7ffeeeaeebec items=0 ppid=2914 pid=5402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:37:26.543000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:37:26.569585 kernel: audit: type=1327 audit(1769042246.543:841): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:37:26.569640 kernel: audit: type=1325 audit(1769042246.563:842): table=nat:136 family=2 entries=104 op=nft_register_chain pid=5402 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:37:26.563000 audit[5402]: NETFILTER_CFG table=nat:136 family=2 entries=104 op=nft_register_chain pid=5402 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:37:26.563000 audit[5402]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffeeeaeec00 a2=0 a3=7ffeeeaeebec items=0 ppid=2914 pid=5402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:37:26.573833 kernel: audit: type=1300 audit(1769042246.563:842): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffeeeaeec00 a2=0 a3=7ffeeeaeebec items=0 ppid=2914 pid=5402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:37:26.563000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:37:26.581028 kernel: audit: type=1327 audit(1769042246.563:842): 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:37:27.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.232.4.171:22-20.161.92.111:48666 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:37:27.308900 systemd[1]: Started sshd@17-172.232.4.171:22-20.161.92.111:48666.service - OpenSSH per-connection server daemon (20.161.92.111:48666). Jan 22 00:37:27.316631 kernel: audit: type=1130 audit(1769042247.308:843): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.232.4.171:22-20.161.92.111:48666 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:37:27.421795 kubelet[2805]: E0122 00:37:27.421759 2805 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 22 00:37:27.461000 audit[5404]: USER_ACCT pid=5404 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:27.462667 sshd[5404]: Accepted publickey for core from 20.161.92.111 port 48666 ssh2: RSA SHA256:Scpzv+CdshzsDE47WY8VrFlF7F5SiGsSfXbWwUnoTgA Jan 22 00:37:27.470546 kernel: audit: type=1101 audit(1769042247.461:844): pid=5404 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:27.470906 sshd-session[5404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:37:27.469000 audit[5404]: CRED_ACQ pid=5404 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:27.479265 kernel: audit: type=1103 audit(1769042247.469:845): pid=5404 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:27.479314 kernel: audit: type=1006 audit(1769042247.469:846): pid=5404 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 22 00:37:27.469000 audit[5404]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe0de9fbb0 a2=3 a3=0 items=0 ppid=1 pid=5404 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:37:27.469000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:37:27.486569 systemd-logind[1596]: New session 18 of user core. Jan 22 00:37:27.492827 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 22 00:37:27.497000 audit[5404]: USER_START pid=5404 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:27.499000 audit[5407]: CRED_ACQ pid=5407 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:27.609488 sshd[5407]: Connection closed by 20.161.92.111 port 48666 Jan 22 00:37:27.609964 sshd-session[5404]: pam_unix(sshd:session): session closed for user core Jan 22 00:37:27.612000 audit[5404]: USER_END pid=5404 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:27.612000 audit[5404]: CRED_DISP pid=5404 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:27.616412 systemd[1]: sshd@17-172.232.4.171:22-20.161.92.111:48666.service: Deactivated successfully. Jan 22 00:37:27.616943 systemd-logind[1596]: Session 18 logged out. Waiting for processes to exit. Jan 22 00:37:27.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.232.4.171:22-20.161.92.111:48666 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:37:27.620537 systemd[1]: session-18.scope: Deactivated successfully. Jan 22 00:37:27.623995 systemd-logind[1596]: Removed session 18. 
Jan 22 00:37:30.423761 kubelet[2805]: E0122 00:37:30.423722 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wm244" podUID="ae45d206-785e-4662-9efc-4b0987941483" Jan 22 00:37:32.422790 kubelet[2805]: E0122 00:37:32.422732 2805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-jv75z" podUID="7058f0b0-e750-4a4f-832e-cf58713e25a5" Jan 22 00:37:32.652843 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 22 00:37:32.652928 kernel: audit: type=1130 audit(1769042252.645:852): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.232.4.171:22-20.161.92.111:43342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:37:32.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.232.4.171:22-20.161.92.111:43342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:37:32.645865 systemd[1]: Started sshd@18-172.232.4.171:22-20.161.92.111:43342.service - OpenSSH per-connection server daemon (20.161.92.111:43342). 
Jan 22 00:37:32.796000 audit[5420]: USER_ACCT pid=5420 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:32.799680 sshd-session[5420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:37:32.800963 sshd[5420]: Accepted publickey for core from 20.161.92.111 port 43342 ssh2: RSA SHA256:Scpzv+CdshzsDE47WY8VrFlF7F5SiGsSfXbWwUnoTgA Jan 22 00:37:32.798000 audit[5420]: CRED_ACQ pid=5420 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:32.807012 kernel: audit: type=1101 audit(1769042252.796:853): pid=5420 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:32.807069 kernel: audit: type=1103 audit(1769042252.798:854): pid=5420 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:32.813955 kernel: audit: type=1006 audit(1769042252.798:855): pid=5420 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 22 00:37:32.798000 audit[5420]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdfb4d0a30 a2=3 a3=0 items=0 ppid=1 pid=5420 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:37:32.826615 kernel: audit: type=1300 audit(1769042252.798:855): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdfb4d0a30 a2=3 a3=0 items=0 ppid=1 pid=5420 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:37:32.798000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:37:32.829978 systemd-logind[1596]: New session 19 of user core. Jan 22 00:37:32.830693 kernel: audit: type=1327 audit(1769042252.798:855): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:37:32.842673 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 22 00:37:32.845000 audit[5420]: USER_START pid=5420 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:32.847000 audit[5423]: CRED_ACQ pid=5423 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:32.856221 kernel: audit: type=1105 audit(1769042252.845:856): pid=5420 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:32.856269 kernel: audit: type=1103 audit(1769042252.847:857): pid=5423 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:32.952868 sshd[5423]: Connection closed by 20.161.92.111 port 43342 Jan 22 00:37:32.953570 sshd-session[5420]: pam_unix(sshd:session): session closed for user core Jan 22 00:37:32.955000 audit[5420]: USER_END pid=5420 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:32.957945 systemd-logind[1596]: Session 19 logged out. Waiting for processes to exit. Jan 22 00:37:32.960208 systemd[1]: sshd@18-172.232.4.171:22-20.161.92.111:43342.service: Deactivated successfully. Jan 22 00:37:32.962771 systemd[1]: session-19.scope: Deactivated successfully. Jan 22 00:37:32.964870 kernel: audit: type=1106 audit(1769042252.955:858): pid=5420 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:32.965732 systemd-logind[1596]: Removed session 19. Jan 22 00:37:32.955000 audit[5420]: CRED_DISP pid=5420 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:32.999578 kernel: audit: type=1104 audit(1769042252.955:859): pid=5420 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 22 00:37:32.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.232.4.171:22-20.161.92.111:43342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'